00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1063 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3730 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.076 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.104 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.133 Using shallow fetch with depth 1 00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.133 > git --version # timeout=10 00:00:00.166 > git --version # 'git version 2.39.2' 00:00:00.166 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.229 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.238 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.248 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.248 > git config core.sparsecheckout # timeout=10 00:00:04.257 > git read-tree -mu HEAD # timeout=10 00:00:04.271 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.296 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.296 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.427 [Pipeline] Start of Pipeline 00:00:04.440 [Pipeline] library 00:00:04.442 Loading library shm_lib@master 00:00:04.442 Library shm_lib@master is cached. Copying from home. 00:00:04.459 [Pipeline] node 00:00:04.470 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.472 [Pipeline] { 00:00:04.481 [Pipeline] catchError 00:00:04.482 [Pipeline] { 00:00:04.494 [Pipeline] wrap 00:00:04.503 [Pipeline] { 00:00:04.512 [Pipeline] stage 00:00:04.514 [Pipeline] { (Prologue) 00:00:04.532 [Pipeline] echo 00:00:04.533 Node: VM-host-SM0 00:00:04.539 [Pipeline] cleanWs 00:00:04.548 [WS-CLEANUP] Deleting project workspace... 00:00:04.548 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.554 [WS-CLEANUP] done 00:00:04.751 [Pipeline] setCustomBuildProperty 00:00:04.835 [Pipeline] httpRequest 00:00:05.394 [Pipeline] echo 00:00:05.396 Sorcerer 10.211.164.20 is alive 00:00:05.403 [Pipeline] retry 00:00:05.405 [Pipeline] { 00:00:05.415 [Pipeline] httpRequest 00:00:05.418 HttpMethod: GET 00:00:05.419 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.420 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.428 Response Code: HTTP/1.1 200 OK 00:00:05.428 Success: Status code 200 is in the accepted range: 200,404 00:00:05.429 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.888 [Pipeline] } 00:00:08.905 [Pipeline] // retry 00:00:08.912 [Pipeline] sh 00:00:09.193 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.208 [Pipeline] httpRequest 00:00:09.621 [Pipeline] echo 00:00:09.623 Sorcerer 10.211.164.20 is alive 00:00:09.633 [Pipeline] retry 00:00:09.635 [Pipeline] { 00:00:09.649 [Pipeline] httpRequest 00:00:09.654 HttpMethod: GET 00:00:09.654 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.655 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.676 Response Code: HTTP/1.1 200 OK 00:00:09.677 Success: Status code 200 is in the accepted range: 200,404 00:00:09.677 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:03.620 [Pipeline] } 00:01:03.636 [Pipeline] // retry 00:01:03.643 [Pipeline] sh 00:01:03.926 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:06.534 [Pipeline] sh 00:01:06.815 + git -C spdk log --oneline -n5 00:01:06.815 c13c99a5e test: Various fixes for Fedora40 00:01:06.815 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:06.815 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:06.815 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:06.815 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:06.832 [Pipeline] withCredentials 00:01:06.842 > git --version # timeout=10 00:01:06.851 > git --version # 'git version 2.39.2' 00:01:06.866 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:06.867 [Pipeline] { 00:01:06.875 [Pipeline] retry 00:01:06.877 [Pipeline] { 00:01:06.891 [Pipeline] sh 00:01:07.171 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:07.182 [Pipeline] } 00:01:07.199 [Pipeline] // retry 00:01:07.204 [Pipeline] } 00:01:07.220 [Pipeline] // withCredentials 00:01:07.229 [Pipeline] httpRequest 00:01:07.673 [Pipeline] echo 00:01:07.675 Sorcerer 10.211.164.20 is alive 00:01:07.684 [Pipeline] retry 00:01:07.686 [Pipeline] { 00:01:07.700 [Pipeline] httpRequest 00:01:07.704 HttpMethod: GET 00:01:07.705 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.705 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.709 Response Code: HTTP/1.1 200 OK 00:01:07.710 Success: Status code 200 is in the accepted range: 200,404 00:01:07.710 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.667 [Pipeline] } 00:01:29.683 [Pipeline] // retry 
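The prologue above boils down to pinning the jbp sources by shallow-fetching a single revision and then pulling pre-packed tarballs from the internal package cache (the "Sorcerer" host, 10.211.164.20) instead of cloning fresh. A condensed sketch of that sequence, reusing the revision and URLs shown in the log; the Jenkins httpRequest step is replaced by a plain curl call for illustration, and the proxy/credential handling specific to this CI environment is omitted:

```bash
# Sketch of the checkout/unpack pattern traced above (CI-internal host and paths).
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # pin the jbp revision from FETCH_HEAD

# Pre-packed sources come from the package cache rather than a fresh clone.
curl -O http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz   # drop archive ownership
```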
00:01:29.691 [Pipeline] sh 00:01:30.086 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:31.473 [Pipeline] sh 00:01:31.752 + git -C dpdk log --oneline -n5 00:01:31.752 eeb0605f11 version: 23.11.0 00:01:31.752 238778122a doc: update release notes for 23.11 00:01:31.752 46aa6b3cfc doc: fix description of RSS features 00:01:31.752 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.752 7e421ae345 devtools: support skipping forbid rule check 00:01:31.769 [Pipeline] writeFile 00:01:31.783 [Pipeline] sh 00:01:32.064 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:32.075 [Pipeline] sh 00:01:32.354 + cat autorun-spdk.conf 00:01:32.355 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.355 SPDK_TEST_NVMF=1 00:01:32.355 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.355 SPDK_TEST_USDT=1 00:01:32.355 SPDK_RUN_UBSAN=1 00:01:32.355 SPDK_TEST_NVMF_MDNS=1 00:01:32.355 NET_TYPE=virt 00:01:32.355 SPDK_JSONRPC_GO_CLIENT=1 00:01:32.355 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.355 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.355 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.361 RUN_NIGHTLY=1 00:01:32.362 [Pipeline] } 00:01:32.373 [Pipeline] // stage 00:01:32.385 [Pipeline] stage 00:01:32.387 [Pipeline] { (Run VM) 00:01:32.398 [Pipeline] sh 00:01:32.677 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.677 + echo 'Start stage prepare_nvme.sh' 00:01:32.677 Start stage prepare_nvme.sh 00:01:32.677 + [[ -n 6 ]] 00:01:32.677 + disk_prefix=ex6 00:01:32.677 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:32.677 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:32.677 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:32.677 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.677 ++ SPDK_TEST_NVMF=1 00:01:32.677 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.677 ++ SPDK_TEST_USDT=1 00:01:32.677 ++ SPDK_RUN_UBSAN=1 00:01:32.677 ++ SPDK_TEST_NVMF_MDNS=1 00:01:32.677 ++ NET_TYPE=virt 00:01:32.677 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:32.677 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.677 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.677 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.677 ++ RUN_NIGHTLY=1 00:01:32.677 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:32.677 + nvme_files=() 00:01:32.677 + declare -A nvme_files 00:01:32.677 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.677 + nvme_files['nvme.img']=5G 00:01:32.677 + nvme_files['nvme-cmb.img']=5G 00:01:32.677 + nvme_files['nvme-multi0.img']=4G 00:01:32.677 + nvme_files['nvme-multi1.img']=4G 00:01:32.677 + nvme_files['nvme-multi2.img']=4G 00:01:32.677 + nvme_files['nvme-openstack.img']=8G 00:01:32.677 + nvme_files['nvme-zns.img']=5G 00:01:32.677 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.677 + (( SPDK_TEST_FTL == 1 )) 00:01:32.677 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.677 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.677 + for nvme in "${!nvme_files[@]}" 00:01:32.677 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:32.677 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.936 + for nvme in "${!nvme_files[@]}" 00:01:32.936 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:32.936 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.936 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:32.936 + echo 'End stage prepare_nvme.sh' 00:01:32.936 End stage prepare_nvme.sh 00:01:32.947 [Pipeline] sh 00:01:33.227 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:33.227 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:33.227 00:01:33.227 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:33.227 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:33.227 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:33.227 HELP=0 00:01:33.227 DRY_RUN=0 00:01:33.227 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:33.227 NVME_DISKS_TYPE=nvme,nvme, 00:01:33.227 NVME_AUTO_CREATE=0 00:01:33.227 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:33.227 NVME_CMB=,, 00:01:33.227 NVME_PMR=,, 00:01:33.227 NVME_ZNS=,, 00:01:33.227 NVME_MS=,, 00:01:33.227 NVME_FDP=,, 00:01:33.227 
SPDK_VAGRANT_DISTRO=fedora39 00:01:33.227 SPDK_VAGRANT_VMCPU=10 00:01:33.227 SPDK_VAGRANT_VMRAM=12288 00:01:33.227 SPDK_VAGRANT_PROVIDER=libvirt 00:01:33.227 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:33.227 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:33.227 SPDK_OPENSTACK_NETWORK=0 00:01:33.227 VAGRANT_PACKAGE_BOX=0 00:01:33.227 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:33.227 FORCE_DISTRO=true 00:01:33.227 VAGRANT_BOX_VERSION= 00:01:33.227 EXTRA_VAGRANTFILES= 00:01:33.227 NIC_MODEL=e1000 00:01:33.227 00:01:33.227 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:33.227 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:35.758 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.695 ==> default: Creating image (snapshot of base box volume). 00:01:36.695 ==> default: Creating domain with the following settings... 00:01:36.695 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734342514_f27a99f9336505f5015e 00:01:36.695 ==> default: -- Domain type: kvm 00:01:36.695 ==> default: -- Cpus: 10 00:01:36.695 ==> default: -- Feature: acpi 00:01:36.695 ==> default: -- Feature: apic 00:01:36.696 ==> default: -- Feature: pae 00:01:36.696 ==> default: -- Memory: 12288M 00:01:36.696 ==> default: -- Memory Backing: hugepages: 00:01:36.696 ==> default: -- Management MAC: 00:01:36.696 ==> default: -- Loader: 00:01:36.696 ==> default: -- Nvram: 00:01:36.696 ==> default: -- Base box: spdk/fedora39 00:01:36.696 ==> default: -- Storage pool: default 00:01:36.696 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734342514_f27a99f9336505f5015e.img (20G) 00:01:36.696 ==> default: -- Volume Cache: default 00:01:36.696 ==> default: -- Kernel: 00:01:36.696 ==> default: -- Initrd: 00:01:36.696 ==> default: -- Graphics Type: vnc 00:01:36.696 ==> default: -- Graphics Port: -1 00:01:36.696 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.696 ==> default: -- Graphics Password: Not defined 00:01:36.696 ==> default: -- Video Type: cirrus 00:01:36.696 ==> default: -- Video VRAM: 9216 00:01:36.696 ==> default: -- Sound Type: 00:01:36.696 ==> default: -- Keymap: en-us 00:01:36.696 ==> default: -- TPM Path: 00:01:36.696 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.696 ==> default: -- Command line args: 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:36.696 ==> default: -> value=-drive, 00:01:36.696 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:36.696 ==> default: -> value=-drive, 00:01:36.696 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.696 ==> default: -> value=-drive, 00:01:36.696 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.696 ==> default: -> value=-drive, 00:01:36.696 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:36.696 ==> default: -> value=-device, 00:01:36.696 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.954 ==> default: Creating shared folders metadata... 00:01:36.954 ==> default: Starting domain. 00:01:39.484 ==> default: Waiting for domain to get an IP address... 00:01:54.360 ==> default: Waiting for SSH to become available... 00:01:55.738 ==> default: Configuring and enabling network interfaces... 00:01:59.950 default: SSH address: 192.168.121.98:22 00:01:59.950 default: SSH username: vagrant 00:01:59.950 default: SSH auth method: private key 00:02:01.852 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:09.967 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:15.237 ==> default: Mounting SSHFS shared folder... 00:02:16.619 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:16.619 ==> default: Checking Mount.. 00:02:17.555 ==> default: Folder Successfully Mounted! 00:02:17.555 ==> default: Running provisioner: file... 00:02:18.491 default: ~/.gitconfig => .gitconfig 00:02:19.058 00:02:19.058 SUCCESS! 00:02:19.058 00:02:19.058 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:19.058 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:19.058 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:19.058 00:02:19.067 [Pipeline] } 00:02:19.082 [Pipeline] // stage 00:02:19.091 [Pipeline] dir 00:02:19.091 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:19.093 [Pipeline] { 00:02:19.105 [Pipeline] catchError 00:02:19.107 [Pipeline] { 00:02:19.120 [Pipeline] sh 00:02:19.400 + vagrant ssh-config --host vagrant 00:02:19.400 + sed -ne /^Host/,$p 00:02:19.400 + tee ssh_conf 00:02:22.690 Host vagrant 00:02:22.690 HostName 192.168.121.98 00:02:22.690 User vagrant 00:02:22.690 Port 22 00:02:22.690 UserKnownHostsFile /dev/null 00:02:22.690 StrictHostKeyChecking no 00:02:22.690 PasswordAuthentication no 00:02:22.690 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:22.690 IdentitiesOnly yes 00:02:22.690 LogLevel FATAL 00:02:22.690 ForwardAgent yes 00:02:22.690 ForwardX11 yes 00:02:22.690 00:02:22.704 [Pipeline] withEnv 00:02:22.706 [Pipeline] { 00:02:22.719 [Pipeline] sh 00:02:22.998 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:22.998 source /etc/os-release 00:02:22.998 [[ -e /image.version ]] && img=$(< /image.version) 00:02:22.998 # Minimal, systemd-like check. 
00:02:22.998 if [[ -e /.dockerenv ]]; then 00:02:22.998 # Clear garbage from the node's name: 00:02:22.998 # agt-er_autotest_547-896 -> autotest_547-896 00:02:22.998 # $HOSTNAME is the actual container id 00:02:22.998 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:22.998 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:22.998 # We can assume this is a mount from a host where container is running, 00:02:22.998 # so fetch its hostname to easily identify the target swarm worker. 00:02:22.998 container="$(< /etc/hostname) ($agent)" 00:02:22.998 else 00:02:22.998 # Fallback 00:02:22.998 container=$agent 00:02:22.998 fi 00:02:22.998 fi 00:02:22.998 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:22.998 00:02:23.268 [Pipeline] } 00:02:23.284 [Pipeline] // withEnv 00:02:23.293 [Pipeline] setCustomBuildProperty 00:02:23.308 [Pipeline] stage 00:02:23.310 [Pipeline] { (Tests) 00:02:23.328 [Pipeline] sh 00:02:23.607 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:23.878 [Pipeline] sh 00:02:24.157 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:24.428 [Pipeline] timeout 00:02:24.428 Timeout set to expire in 1 hr 0 min 00:02:24.430 [Pipeline] { 00:02:24.442 [Pipeline] sh 00:02:24.720 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:25.288 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:25.321 [Pipeline] sh 00:02:25.605 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:25.877 [Pipeline] sh 00:02:26.156 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:26.430 [Pipeline] sh 00:02:26.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:26.967 ++ readlink -f spdk_repo 00:02:26.967 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:26.967 + [[ -n /home/vagrant/spdk_repo ]] 00:02:26.967 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:26.967 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:26.967 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:26.967 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:26.967 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:26.967 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:26.967 + cd /home/vagrant/spdk_repo 00:02:26.967 + source /etc/os-release 00:02:26.967 ++ NAME='Fedora Linux' 00:02:26.967 ++ VERSION='39 (Cloud Edition)' 00:02:26.967 ++ ID=fedora 00:02:26.967 ++ VERSION_ID=39 00:02:26.967 ++ VERSION_CODENAME= 00:02:26.967 ++ PLATFORM_ID=platform:f39 00:02:26.967 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:26.967 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:26.967 ++ LOGO=fedora-logo-icon 00:02:26.967 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:26.967 ++ HOME_URL=https://fedoraproject.org/ 00:02:26.967 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:26.967 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:26.967 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:26.967 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:26.967 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:26.967 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:26.967 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:26.967 ++ SUPPORT_END=2024-11-12 00:02:26.967 ++ VARIANT='Cloud Edition' 00:02:26.967 ++ VARIANT_ID=cloud 00:02:26.967 + uname -a 00:02:26.967 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:26.967 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:26.967 Hugepages 00:02:26.967 node hugesize free / total 00:02:26.967 node0 1048576kB 0 / 0 00:02:26.967 node0 2048kB 0 / 0 00:02:26.967 00:02:26.967 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.967 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:26.967 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:26.967 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:26.967 + rm -f /tmp/spdk-ld-path 00:02:26.967 + source autorun-spdk.conf 00:02:26.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:26.967 ++ SPDK_TEST_NVMF=1 00:02:26.967 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:26.967 ++ SPDK_TEST_USDT=1 00:02:26.967 ++ SPDK_RUN_UBSAN=1 00:02:26.967 ++ SPDK_TEST_NVMF_MDNS=1 00:02:26.967 ++ NET_TYPE=virt 00:02:26.967 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:26.967 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:26.967 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:26.967 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.967 ++ RUN_NIGHTLY=1 00:02:26.967 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:26.967 + [[ -n '' ]] 00:02:26.967 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:27.226 + for M in /var/spdk/build-*-manifest.txt 00:02:27.226 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:27.226 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.226 + for M in /var/spdk/build-*-manifest.txt 00:02:27.226 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:27.226 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.226 + for M in /var/spdk/build-*-manifest.txt 00:02:27.226 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:27.226 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:27.227 ++ uname 00:02:27.227 + [[ Linux == \L\i\n\u\x ]] 00:02:27.227 + sudo dmesg -T 00:02:27.227 + sudo dmesg --clear 00:02:27.227 + dmesg_pid=5975 00:02:27.227 + [[ Fedora Linux == FreeBSD ]] 00:02:27.227 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.227 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:27.227 + sudo dmesg -Tw 00:02:27.227 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:27.227 + [[ -x /usr/src/fio-static/fio ]] 00:02:27.227 + export FIO_BIN=/usr/src/fio-static/fio 00:02:27.227 + FIO_BIN=/usr/src/fio-static/fio 00:02:27.227 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:27.227 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:27.227 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:27.227 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.227 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:27.227 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:27.227 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.227 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:27.227 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:27.227 Test configuration: 00:02:27.227 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:27.227 SPDK_TEST_NVMF=1 00:02:27.227 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:27.227 SPDK_TEST_USDT=1 00:02:27.227 SPDK_RUN_UBSAN=1 00:02:27.227 SPDK_TEST_NVMF_MDNS=1 00:02:27.227 NET_TYPE=virt 00:02:27.227 SPDK_JSONRPC_GO_CLIENT=1 00:02:27.227 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:27.227 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:27.227 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:27.227 RUN_NIGHTLY=1 09:49:25 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:27.227 09:49:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:27.227 09:49:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:27.227 09:49:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.227 09:49:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.227 09:49:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.227 09:49:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.227 09:49:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.227 09:49:25 -- paths/export.sh@5 -- $ export PATH 00:02:27.227 09:49:25 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.227 09:49:25 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:27.227 09:49:25 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:27.227 09:49:25 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734342565.XXXXXX 00:02:27.227 09:49:25 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734342565.gcOCHY 00:02:27.227 09:49:25 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:27.227 09:49:25 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:27.227 09:49:25 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.227 09:49:25 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:27.227 09:49:25 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:27.227 09:49:25 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:27.227 09:49:25 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:27.227 09:49:25 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:27.227 09:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.227 09:49:25 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:27.227 09:49:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:27.227 09:49:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:27.227 09:49:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:27.227 09:49:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:27.227 Mon Dec 16 09:49:25 AM UTC 2024 00:02:27.227 09:49:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.227 LTS-67-gc13c99a5e 00:02:27.227 09:49:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:27.227 09:49:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.227 09:49:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.227 09:49:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:27.227 09:49:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:27.227 09:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.227 ************************************ 00:02:27.227 START TEST ubsan 00:02:27.227 ************************************ 00:02:27.227 using ubsan 00:02:27.227 09:49:25 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:27.227 00:02:27.227 real 0m0.000s 00:02:27.227 user 0m0.000s 00:02:27.227 sys 0m0.000s 00:02:27.227 09:49:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:27.227 09:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.227 ************************************ 00:02:27.227 END TEST ubsan 00:02:27.227 ************************************ 00:02:27.486 
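The autorun-spdk.conf written earlier is a plain KEY=VALUE file that spdk/autorun.sh sources (visible above as the `++ SPDK_TEST_...=1` trace lines); its flags steer which suites run and how get_config_params assembles the configure options. A minimal sketch of that consumption pattern, using the same keys as this job; the snippet is illustrative only and is not the actual autorun/autobuild code:

```bash
#!/usr/bin/env bash
# Illustration of how KEY=VALUE flags from autorun-spdk.conf can drive the build,
# mirroring the config_params string seen in the log above.
source /home/vagrant/spdk_repo/autorun-spdk.conf

config_params="--enable-debug --enable-werror"
[[ ${SPDK_RUN_UBSAN:-0} -eq 1 ]] && config_params+=" --enable-ubsan"
if [[ -n ${SPDK_TEST_NATIVE_DPDK:-} ]]; then
    # Build against the DPDK tree compiled earlier in the job instead of the submodule.
    config_params+=" --with-dpdk=${SPDK_RUN_EXTERNAL_DPDK}"
fi
[[ ${SPDK_TEST_NVMF:-0} -eq 1 ]] && echo "nvmf suite enabled over ${SPDK_TEST_NVMF_TRANSPORT}"
echo "./configure ${config_params}"
```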
09:49:25 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:27.486 09:49:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:27.486 09:49:25 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:27.486 09:49:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:27.486 09:49:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.486 ************************************ 00:02:27.486 START TEST build_native_dpdk 00:02:27.486 ************************************ 00:02:27.486 09:49:25 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:27.486 09:49:25 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:27.486 09:49:25 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:27.486 09:49:25 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:27.486 09:49:25 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:27.486 09:49:25 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:27.486 09:49:25 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:27.486 09:49:25 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:27.486 09:49:25 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:27.486 09:49:25 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:27.486 09:49:25 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:27.486 09:49:25 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:27.486 09:49:25 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.486 09:49:25 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:27.486 eeb0605f11 version: 23.11.0 00:02:27.486 238778122a doc: update release notes for 23.11 00:02:27.486 46aa6b3cfc doc: fix description of RSS features 00:02:27.486 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:27.486 7e421ae345 devtools: support skipping forbid rule check 00:02:27.486 09:49:25 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:27.486 09:49:25 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:27.486 09:49:25 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:27.486 09:49:25 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:27.486 09:49:25 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:27.486 09:49:25 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:27.486 09:49:25 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:27.486 09:49:25 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:27.486 09:49:25 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:27.486 09:49:25 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:27.486 09:49:25 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:27.486 09:49:25 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:27.486 09:49:25 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:27.486 09:49:25 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:27.486 09:49:25 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:27.486 09:49:25 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:27.486 09:49:25 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:27.486 09:49:25 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.486 09:49:25 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:27.486 09:49:25 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:27.486 09:49:25 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:27.486 09:49:25 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:27.486 09:49:25 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:27.486 09:49:25 -- scripts/common.sh@343 -- $ case "$op" in 00:02:27.486 09:49:25 -- scripts/common.sh@344 -- $ : 1 00:02:27.486 09:49:25 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:27.486 09:49:25 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.486 09:49:25 -- scripts/common.sh@364 -- $ decimal 23 00:02:27.486 09:49:25 -- scripts/common.sh@352 -- $ local d=23 00:02:27.486 09:49:25 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.486 09:49:25 -- scripts/common.sh@354 -- $ echo 23 00:02:27.486 09:49:25 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:27.486 09:49:25 -- scripts/common.sh@365 -- $ decimal 21 00:02:27.486 09:49:25 -- scripts/common.sh@352 -- $ local d=21 00:02:27.486 09:49:25 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:27.486 09:49:25 -- scripts/common.sh@354 -- $ echo 21 00:02:27.486 09:49:25 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:27.486 09:49:25 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:27.486 09:49:25 -- scripts/common.sh@366 -- $ return 1 00:02:27.486 09:49:25 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:27.486 patching file config/rte_config.h 00:02:27.486 Hunk #1 succeeded at 60 (offset 1 line). 00:02:27.486 09:49:25 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:27.486 09:49:25 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:27.486 09:49:25 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:27.486 09:49:25 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:27.486 09:49:25 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:27.486 09:49:25 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:27.486 09:49:25 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.486 09:49:25 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:27.486 09:49:25 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:27.486 09:49:25 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:27.486 09:49:25 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:27.486 09:49:25 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:27.486 09:49:25 -- scripts/common.sh@343 -- $ case "$op" in 00:02:27.486 09:49:25 -- scripts/common.sh@344 -- $ : 1 00:02:27.486 09:49:25 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:27.486 09:49:25 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.486 09:49:25 -- scripts/common.sh@364 -- $ decimal 23 00:02:27.486 09:49:25 -- scripts/common.sh@352 -- $ local d=23 00:02:27.486 09:49:25 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.486 09:49:25 -- scripts/common.sh@354 -- $ echo 23 00:02:27.486 09:49:25 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:27.486 09:49:25 -- scripts/common.sh@365 -- $ decimal 24 00:02:27.486 09:49:25 -- scripts/common.sh@352 -- $ local d=24 00:02:27.486 09:49:25 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.486 09:49:25 -- scripts/common.sh@354 -- $ echo 24 00:02:27.486 09:49:25 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:27.486 09:49:25 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:27.486 09:49:25 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:27.486 09:49:25 -- scripts/common.sh@367 -- $ return 0 00:02:27.486 09:49:25 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:27.486 patching file lib/pcapng/rte_pcapng.c 00:02:27.486 09:49:25 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:27.486 09:49:25 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:27.487 09:49:25 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:27.487 09:49:25 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:27.487 09:49:25 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:32.785 The Meson build system 00:02:32.785 Version: 1.5.0 00:02:32.785 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:32.785 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:32.785 Build type: native build 00:02:32.786 Program cat found: YES (/usr/bin/cat) 00:02:32.786 Project name: DPDK 00:02:32.786 Project version: 23.11.0 00:02:32.786 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.786 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:32.786 Host machine cpu family: x86_64 00:02:32.786 Host machine cpu: x86_64 00:02:32.786 Message: ## Building in Developer Mode ## 00:02:32.786 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.786 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:32.786 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.786 Program python3 found: YES (/usr/bin/python3) 00:02:32.786 Program cat found: YES (/usr/bin/cat) 00:02:32.786 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
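The lt/cmp_versions xtrace above is the dotted-version comparison from scripts/common.sh: both versions are split on `.`, `-` and `:` and the components are walked numerically; the two checks decide which DPDK compatibility patches to apply (23.11.0 is not older than 21.11.0, but is older than 24.07.0). A standalone sketch of the same idiom, simplified to purely numeric components (the real helper also normalizes non-numeric parts):

```bash
# Sketch of the cmp_versions idiom traced above: ver_lt A B -> success if A < B.
ver_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
    done
    return 1   # all components equal: not less-than
}

ver_lt 23.11.0 21.11.0 && echo "older" || echo "not older"   # -> not older
ver_lt 23.11.0 24.07.0 && echo "older"                        # -> older
```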
00:02:32.786 Compiler for C supports arguments -march=native: YES 00:02:32.786 Checking for size of "void *" : 8 00:02:32.786 Checking for size of "void *" : 8 (cached) 00:02:32.786 Library m found: YES 00:02:32.786 Library numa found: YES 00:02:32.786 Has header "numaif.h" : YES 00:02:32.786 Library fdt found: NO 00:02:32.786 Library execinfo found: NO 00:02:32.786 Has header "execinfo.h" : YES 00:02:32.786 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.786 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.786 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.786 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.786 Run-time dependency openssl found: YES 3.1.1 00:02:32.786 Run-time dependency libpcap found: YES 1.10.4 00:02:32.786 Has header "pcap.h" with dependency libpcap: YES 00:02:32.786 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.786 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.786 Compiler for C supports arguments -Wformat: YES 00:02:32.786 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.786 Compiler for C supports arguments -Wformat-security: NO 00:02:32.786 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.786 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.786 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.786 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.786 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.786 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.786 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.786 Compiler for C supports arguments -Wundef: YES 00:02:32.786 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.786 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:32.786 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:32.786 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.786 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:32.786 Program objdump found: YES (/usr/bin/objdump) 00:02:32.786 Compiler for C supports arguments -mavx512f: YES 00:02:32.786 Checking if "AVX512 checking" compiles: YES 00:02:32.786 Fetching value of define "__SSE4_2__" : 1 00:02:32.786 Fetching value of define "__AES__" : 1 00:02:32.786 Fetching value of define "__AVX__" : 1 00:02:32.786 Fetching value of define "__AVX2__" : 1 00:02:32.786 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.786 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.786 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.786 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.786 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.786 Fetching value of define "__PCLMUL__" : 1 00:02:32.786 Fetching value of define "__RDRND__" : 1 00:02:32.786 Fetching value of define "__RDSEED__" : 1 00:02:32.786 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.786 Fetching value of define "__znver1__" : (undefined) 00:02:32.786 Fetching value of define "__znver2__" : (undefined) 00:02:32.786 Fetching value of define "__znver3__" : (undefined) 00:02:32.786 Fetching value of define "__znver4__" : (undefined) 00:02:32.786 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.786 Message: lib/log: Defining dependency "log" 00:02:32.786 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.786 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.786 Checking for function "getentropy" : NO 00:02:32.786 Message: lib/eal: Defining dependency "eal" 00:02:32.786 Message: lib/ring: Defining dependency "ring" 00:02:32.786 Message: lib/rcu: Defining dependency "rcu" 00:02:32.786 Message: lib/mempool: Defining dependency "mempool" 00:02:32.786 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.786 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.786 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.786 Compiler for C supports arguments -mpclmul: YES 00:02:32.786 Compiler for C supports arguments -maes: YES 00:02:32.786 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.786 Compiler for C supports arguments -mavx512bw: YES 00:02:32.786 Compiler for C supports arguments -mavx512dq: YES 00:02:32.786 Compiler for C supports arguments -mavx512vl: YES 00:02:32.786 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.786 Compiler for C supports arguments -mavx2: YES 00:02:32.786 Compiler for C supports arguments -mavx: YES 00:02:32.786 Message: lib/net: Defining dependency "net" 00:02:32.786 Message: lib/meter: Defining dependency "meter" 00:02:32.786 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.786 Message: lib/pci: Defining dependency "pci" 00:02:32.786 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.786 Message: lib/metrics: Defining dependency "metrics" 00:02:32.786 Message: lib/hash: Defining dependency "hash" 00:02:32.786 Message: lib/timer: Defining dependency "timer" 00:02:32.786 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:32.786 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:32.786 Message: lib/acl: Defining dependency "acl" 00:02:32.786 Message: lib/bbdev: Defining dependency "bbdev" 00:02:32.786 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:32.786 Run-time dependency libelf found: YES 0.191 00:02:32.786 Message: lib/bpf: Defining dependency "bpf" 00:02:32.786 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:32.786 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.786 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.786 Message: lib/distributor: Defining dependency "distributor" 00:02:32.786 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.786 Message: lib/efd: Defining dependency "efd" 00:02:32.786 Message: lib/eventdev: Defining dependency "eventdev" 00:02:32.786 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:32.786 Message: lib/gpudev: Defining dependency "gpudev" 00:02:32.786 Message: lib/gro: Defining dependency "gro" 00:02:32.786 Message: lib/gso: Defining dependency "gso" 00:02:32.786 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:32.786 Message: lib/jobstats: Defining dependency "jobstats" 00:02:32.786 Message: lib/latencystats: Defining dependency "latencystats" 00:02:32.786 Message: lib/lpm: Defining dependency "lpm" 00:02:32.786 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:32.786 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:32.786 Message: lib/member: Defining dependency "member" 00:02:32.786 Message: lib/pcapng: Defining dependency "pcapng" 00:02:32.786 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.786 Message: lib/power: Defining dependency "power" 00:02:32.786 Message: lib/rawdev: Defining dependency "rawdev" 00:02:32.786 Message: lib/regexdev: Defining dependency "regexdev" 00:02:32.786 Message: lib/mldev: Defining dependency "mldev" 00:02:32.786 Message: lib/rib: Defining dependency "rib" 00:02:32.786 Message: lib/reorder: Defining dependency "reorder" 00:02:32.786 Message: lib/sched: Defining dependency "sched" 00:02:32.786 Message: lib/security: Defining dependency "security" 00:02:32.786 Message: lib/stack: Defining dependency "stack" 00:02:32.786 Has header "linux/userfaultfd.h" : YES 00:02:32.786 Has header "linux/vduse.h" : YES 00:02:32.786 Message: lib/vhost: Defining dependency "vhost" 00:02:32.786 Message: lib/ipsec: Defining dependency "ipsec" 00:02:32.786 Message: lib/pdcp: Defining dependency "pdcp" 00:02:32.786 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.786 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:32.786 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:32.786 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:32.786 Message: lib/fib: Defining dependency "fib" 00:02:32.786 Message: lib/port: Defining dependency "port" 00:02:32.786 Message: lib/pdump: Defining dependency "pdump" 00:02:32.786 Message: lib/table: Defining dependency "table" 00:02:32.786 Message: lib/pipeline: Defining dependency "pipeline" 00:02:32.786 Message: lib/graph: Defining dependency "graph" 00:02:32.786 Message: lib/node: Defining dependency "node" 00:02:32.786 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.689 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.689 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.689 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.689 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:34.689 Compiler for C supports arguments -Wno-unused-value: YES 00:02:34.689 Compiler for C supports arguments -Wno-format: YES 00:02:34.689 Compiler for C supports arguments -Wno-format-security: YES 00:02:34.689 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:34.689 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:34.689 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:34.689 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:34.689 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.689 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.689 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.689 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:34.689 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:34.689 Has header "sys/epoll.h" : YES 00:02:34.689 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.689 Configuring doxy-api-html.conf using configuration 00:02:34.689 Configuring doxy-api-man.conf using configuration 00:02:34.689 Program mandb found: YES (/usr/bin/mandb) 00:02:34.689 Program sphinx-build found: NO 00:02:34.689 Configuring rte_build_config.h using configuration 00:02:34.689 Message: 00:02:34.689 ================= 00:02:34.689 Applications Enabled 00:02:34.689 ================= 
00:02:34.689 00:02:34.689 apps: 00:02:34.689 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:34.689 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:34.689 test-pmd, test-regex, test-sad, test-security-perf, 00:02:34.689 00:02:34.689 Message: 00:02:34.689 ================= 00:02:34.689 Libraries Enabled 00:02:34.689 ================= 00:02:34.689 00:02:34.689 libs: 00:02:34.689 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.689 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:34.689 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:34.689 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:34.690 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:34.690 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:34.690 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:34.690 00:02:34.690 00:02:34.690 Message: 00:02:34.690 =============== 00:02:34.690 Drivers Enabled 00:02:34.690 =============== 00:02:34.690 00:02:34.690 common: 00:02:34.690 00:02:34.690 bus: 00:02:34.690 pci, vdev, 00:02:34.690 mempool: 00:02:34.690 ring, 00:02:34.690 dma: 00:02:34.690 00:02:34.690 net: 00:02:34.690 i40e, 00:02:34.690 raw: 00:02:34.690 00:02:34.690 crypto: 00:02:34.690 00:02:34.690 compress: 00:02:34.690 00:02:34.690 regex: 00:02:34.690 00:02:34.690 ml: 00:02:34.690 00:02:34.690 vdpa: 00:02:34.690 00:02:34.690 event: 00:02:34.690 00:02:34.690 baseband: 00:02:34.690 00:02:34.690 gpu: 00:02:34.690 00:02:34.690 00:02:34.690 Message: 00:02:34.690 ================= 00:02:34.690 Content Skipped 00:02:34.690 ================= 00:02:34.690 00:02:34.690 apps: 00:02:34.690 00:02:34.690 libs: 00:02:34.690 00:02:34.690 drivers: 00:02:34.690 common/cpt: not in enabled drivers build config 00:02:34.690 common/dpaax: not in enabled drivers build config 00:02:34.690 common/iavf: not in enabled drivers build config 00:02:34.690 common/idpf: not in enabled drivers build config 00:02:34.690 common/mvep: not in enabled drivers build config 00:02:34.690 common/octeontx: not in enabled drivers build config 00:02:34.690 bus/auxiliary: not in enabled drivers build config 00:02:34.690 bus/cdx: not in enabled drivers build config 00:02:34.690 bus/dpaa: not in enabled drivers build config 00:02:34.690 bus/fslmc: not in enabled drivers build config 00:02:34.690 bus/ifpga: not in enabled drivers build config 00:02:34.690 bus/platform: not in enabled drivers build config 00:02:34.690 bus/vmbus: not in enabled drivers build config 00:02:34.690 common/cnxk: not in enabled drivers build config 00:02:34.690 common/mlx5: not in enabled drivers build config 00:02:34.690 common/nfp: not in enabled drivers build config 00:02:34.690 common/qat: not in enabled drivers build config 00:02:34.690 common/sfc_efx: not in enabled drivers build config 00:02:34.690 mempool/bucket: not in enabled drivers build config 00:02:34.690 mempool/cnxk: not in enabled drivers build config 00:02:34.690 mempool/dpaa: not in enabled drivers build config 00:02:34.690 mempool/dpaa2: not in enabled drivers build config 00:02:34.690 mempool/octeontx: not in enabled drivers build config 00:02:34.690 mempool/stack: not in enabled drivers build config 00:02:34.690 dma/cnxk: not in enabled drivers build config 00:02:34.690 dma/dpaa: not in enabled drivers build config 00:02:34.690 dma/dpaa2: not in enabled drivers build config 00:02:34.690 
dma/hisilicon: not in enabled drivers build config 00:02:34.690 dma/idxd: not in enabled drivers build config 00:02:34.690 dma/ioat: not in enabled drivers build config 00:02:34.690 dma/skeleton: not in enabled drivers build config 00:02:34.690 net/af_packet: not in enabled drivers build config 00:02:34.690 net/af_xdp: not in enabled drivers build config 00:02:34.690 net/ark: not in enabled drivers build config 00:02:34.690 net/atlantic: not in enabled drivers build config 00:02:34.690 net/avp: not in enabled drivers build config 00:02:34.690 net/axgbe: not in enabled drivers build config 00:02:34.690 net/bnx2x: not in enabled drivers build config 00:02:34.690 net/bnxt: not in enabled drivers build config 00:02:34.690 net/bonding: not in enabled drivers build config 00:02:34.690 net/cnxk: not in enabled drivers build config 00:02:34.690 net/cpfl: not in enabled drivers build config 00:02:34.690 net/cxgbe: not in enabled drivers build config 00:02:34.690 net/dpaa: not in enabled drivers build config 00:02:34.690 net/dpaa2: not in enabled drivers build config 00:02:34.690 net/e1000: not in enabled drivers build config 00:02:34.690 net/ena: not in enabled drivers build config 00:02:34.690 net/enetc: not in enabled drivers build config 00:02:34.690 net/enetfec: not in enabled drivers build config 00:02:34.690 net/enic: not in enabled drivers build config 00:02:34.690 net/failsafe: not in enabled drivers build config 00:02:34.690 net/fm10k: not in enabled drivers build config 00:02:34.690 net/gve: not in enabled drivers build config 00:02:34.690 net/hinic: not in enabled drivers build config 00:02:34.690 net/hns3: not in enabled drivers build config 00:02:34.690 net/iavf: not in enabled drivers build config 00:02:34.690 net/ice: not in enabled drivers build config 00:02:34.690 net/idpf: not in enabled drivers build config 00:02:34.690 net/igc: not in enabled drivers build config 00:02:34.690 net/ionic: not in enabled drivers build config 00:02:34.690 net/ipn3ke: not in enabled drivers build config 00:02:34.690 net/ixgbe: not in enabled drivers build config 00:02:34.690 net/mana: not in enabled drivers build config 00:02:34.690 net/memif: not in enabled drivers build config 00:02:34.690 net/mlx4: not in enabled drivers build config 00:02:34.690 net/mlx5: not in enabled drivers build config 00:02:34.690 net/mvneta: not in enabled drivers build config 00:02:34.690 net/mvpp2: not in enabled drivers build config 00:02:34.690 net/netvsc: not in enabled drivers build config 00:02:34.690 net/nfb: not in enabled drivers build config 00:02:34.690 net/nfp: not in enabled drivers build config 00:02:34.690 net/ngbe: not in enabled drivers build config 00:02:34.690 net/null: not in enabled drivers build config 00:02:34.690 net/octeontx: not in enabled drivers build config 00:02:34.690 net/octeon_ep: not in enabled drivers build config 00:02:34.690 net/pcap: not in enabled drivers build config 00:02:34.690 net/pfe: not in enabled drivers build config 00:02:34.690 net/qede: not in enabled drivers build config 00:02:34.690 net/ring: not in enabled drivers build config 00:02:34.690 net/sfc: not in enabled drivers build config 00:02:34.690 net/softnic: not in enabled drivers build config 00:02:34.690 net/tap: not in enabled drivers build config 00:02:34.690 net/thunderx: not in enabled drivers build config 00:02:34.690 net/txgbe: not in enabled drivers build config 00:02:34.690 net/vdev_netvsc: not in enabled drivers build config 00:02:34.690 net/vhost: not in enabled drivers build config 00:02:34.690 net/virtio: 
not in enabled drivers build config 00:02:34.690 net/vmxnet3: not in enabled drivers build config 00:02:34.690 raw/cnxk_bphy: not in enabled drivers build config 00:02:34.690 raw/cnxk_gpio: not in enabled drivers build config 00:02:34.690 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:34.690 raw/ifpga: not in enabled drivers build config 00:02:34.690 raw/ntb: not in enabled drivers build config 00:02:34.690 raw/skeleton: not in enabled drivers build config 00:02:34.690 crypto/armv8: not in enabled drivers build config 00:02:34.690 crypto/bcmfs: not in enabled drivers build config 00:02:34.690 crypto/caam_jr: not in enabled drivers build config 00:02:34.690 crypto/ccp: not in enabled drivers build config 00:02:34.690 crypto/cnxk: not in enabled drivers build config 00:02:34.690 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.690 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.690 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.690 crypto/mlx5: not in enabled drivers build config 00:02:34.690 crypto/mvsam: not in enabled drivers build config 00:02:34.690 crypto/nitrox: not in enabled drivers build config 00:02:34.690 crypto/null: not in enabled drivers build config 00:02:34.690 crypto/octeontx: not in enabled drivers build config 00:02:34.690 crypto/openssl: not in enabled drivers build config 00:02:34.690 crypto/scheduler: not in enabled drivers build config 00:02:34.690 crypto/uadk: not in enabled drivers build config 00:02:34.690 crypto/virtio: not in enabled drivers build config 00:02:34.690 compress/isal: not in enabled drivers build config 00:02:34.690 compress/mlx5: not in enabled drivers build config 00:02:34.690 compress/octeontx: not in enabled drivers build config 00:02:34.690 compress/zlib: not in enabled drivers build config 00:02:34.690 regex/mlx5: not in enabled drivers build config 00:02:34.690 regex/cn9k: not in enabled drivers build config 00:02:34.690 ml/cnxk: not in enabled drivers build config 00:02:34.690 vdpa/ifc: not in enabled drivers build config 00:02:34.690 vdpa/mlx5: not in enabled drivers build config 00:02:34.690 vdpa/nfp: not in enabled drivers build config 00:02:34.690 vdpa/sfc: not in enabled drivers build config 00:02:34.690 event/cnxk: not in enabled drivers build config 00:02:34.690 event/dlb2: not in enabled drivers build config 00:02:34.690 event/dpaa: not in enabled drivers build config 00:02:34.690 event/dpaa2: not in enabled drivers build config 00:02:34.690 event/dsw: not in enabled drivers build config 00:02:34.690 event/opdl: not in enabled drivers build config 00:02:34.690 event/skeleton: not in enabled drivers build config 00:02:34.690 event/sw: not in enabled drivers build config 00:02:34.690 event/octeontx: not in enabled drivers build config 00:02:34.690 baseband/acc: not in enabled drivers build config 00:02:34.690 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:34.690 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:34.690 baseband/la12xx: not in enabled drivers build config 00:02:34.690 baseband/null: not in enabled drivers build config 00:02:34.690 baseband/turbo_sw: not in enabled drivers build config 00:02:34.690 gpu/cuda: not in enabled drivers build config 00:02:34.690 00:02:34.690 00:02:34.690 Build targets in project: 220 00:02:34.690 00:02:34.690 DPDK 23.11.0 00:02:34.690 00:02:34.690 User defined options 00:02:34.690 libdir : lib 00:02:34.690 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:34.690 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:34.690 c_link_args : 00:02:34.690 enable_docs : false 00:02:34.690 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.690 enable_kmods : false 00:02:34.690 machine : native 00:02:34.690 tests : false 00:02:34.690 00:02:34.690 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.690 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:34.690 09:49:32 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:34.690 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:34.690 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.690 [2/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.690 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.691 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.691 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.691 [6/710] Linking static target lib/librte_kvargs.a 00:02:34.691 [7/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.691 [8/710] Linking static target lib/librte_log.a 00:02:34.691 [9/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.949 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.949 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.208 [12/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.208 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.208 [14/710] Linking target lib/librte_log.so.24.0 00:02:35.208 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.208 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.208 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.466 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.466 [19/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:35.466 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.466 [21/710] Linking target lib/librte_kvargs.so.24.0 00:02:35.725 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.725 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.725 [24/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:35.725 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.725 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.984 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.984 [28/710] Linking static target lib/librte_telemetry.a 00:02:35.984 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.984 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.984 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.984 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.242 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.242 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.242 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.242 [36/710] Linking target lib/librte_telemetry.so.24.0 00:02:36.242 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.242 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.242 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.242 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.500 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:36.500 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.500 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.500 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.757 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.757 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.014 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.014 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.014 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.014 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.014 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.014 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.014 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.014 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.272 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.272 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.272 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.531 [58/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.531 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.531 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.531 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.531 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.531 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.531 [64/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.790 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.790 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.790 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.790 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.048 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.048 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.048 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.048 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
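[Editor's note] For reference, the following is a hypothetical reconstruction of the configure step implied by the "User defined options" summary printed above. The real invocation lives in SPDK's autobuild scripts and is not shown in this log, so the option spellings below are assumptions read back from that summary; using the explicit `meson setup` form also avoids the "ambiguous and deprecated" warning meson emits above.

    # Hypothetical reconstruction (assumption), not copied from the job scripts.
    # The enable_drivers value is reproduced as printed in the summary above,
    # where it ends in a trailing comma and may be truncated.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base' \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10   # the parallel compile whose [N/710] progress follows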
00:02:38.306 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.306 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.306 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.306 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.306 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.306 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.564 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.564 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.822 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.822 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.822 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.822 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.822 [85/710] Linking static target lib/librte_ring.a 00:02:39.080 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.080 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.080 [88/710] Linking static target lib/librte_eal.a 00:02:39.080 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.080 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.339 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.339 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.339 [93/710] Linking static target lib/librte_mempool.a 00:02:39.339 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.339 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.597 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.597 [97/710] Linking static target lib/librte_rcu.a 00:02:39.597 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.597 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.855 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.855 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.855 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.855 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.855 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.113 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.113 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.113 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.113 [108/710] Linking static target lib/librte_mbuf.a 00:02:40.372 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.372 [110/710] Linking static target lib/librte_net.a 00:02:40.372 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.372 [112/710] Linking static target lib/librte_meter.a 00:02:40.630 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.630 [114/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.630 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.630 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.630 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:40.630 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.630 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.565 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.565 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.565 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.565 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.824 [124/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.824 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.824 [126/710] Linking static target lib/librte_pci.a 00:02:41.824 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:41.824 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.824 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.824 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.082 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.082 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.082 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.082 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.082 [135/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.082 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.082 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.082 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.340 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.340 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.340 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.340 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.598 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.598 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.598 [145/710] Linking static target lib/librte_cmdline.a 00:02:42.856 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:42.856 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:42.856 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:42.856 [149/710] Linking static target lib/librte_metrics.a 00:02:43.114 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.114 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.372 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.372 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.372 [154/710] Linking static target 
lib/librte_timer.a 00:02:43.372 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.938 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.197 [157/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:44.197 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:44.197 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.197 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:44.763 [161/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.763 [162/710] Linking static target lib/librte_bitratestats.a 00:02:44.763 [163/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.763 [164/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.021 [165/710] Linking static target lib/librte_ethdev.a 00:02:45.021 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:45.021 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.021 [168/710] Linking target lib/librte_eal.so.24.0 00:02:45.021 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.021 [170/710] Linking static target lib/librte_bbdev.a 00:02:45.021 [171/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.279 [172/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:45.280 [173/710] Linking target lib/librte_ring.so.24.0 00:02:45.280 [174/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.280 [175/710] Linking target lib/librte_meter.so.24.0 00:02:45.280 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:45.280 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:45.280 [178/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:45.538 [179/710] Linking target lib/librte_mempool.so.24.0 00:02:45.538 [180/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:45.538 [181/710] Linking target lib/librte_pci.so.24.0 00:02:45.538 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:45.538 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:45.538 [184/710] Linking static target lib/librte_hash.a 00:02:45.538 [185/710] Linking target lib/librte_timer.so.24.0 00:02:45.538 [186/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:45.538 [187/710] Linking target lib/librte_mbuf.so.24.0 00:02:45.538 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:45.538 [189/710] Linking static target lib/acl/libavx2_tmp.a 00:02:45.538 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.797 [191/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:45.797 [192/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:45.797 [193/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:45.797 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:45.797 [195/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:45.797 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:02:45.797 [197/710] Linking target lib/librte_bbdev.so.24.0 00:02:45.797 
[198/710] Linking target lib/librte_net.so.24.0 00:02:45.797 [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:46.055 [200/710] Linking target lib/librte_cmdline.so.24.0 00:02:46.055 [201/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.055 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:46.055 [203/710] Linking static target lib/librte_acl.a 00:02:46.055 [204/710] Linking target lib/librte_hash.so.24.0 00:02:46.314 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:46.314 [206/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:46.314 [207/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:46.314 [208/710] Linking static target lib/librte_cfgfile.a 00:02:46.314 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.314 [210/710] Linking target lib/librte_acl.so.24.0 00:02:46.575 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:46.575 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:46.575 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:46.575 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:46.575 [215/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.575 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:02:46.841 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.841 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.099 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.099 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:47.099 [221/710] Linking static target lib/librte_bpf.a 00:02:47.099 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.357 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.357 [224/710] Linking static target lib/librte_compressdev.a 00:02:47.357 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.357 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.357 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:47.616 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:47.616 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:47.616 [230/710] Linking static target lib/librte_distributor.a 00:02:47.616 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.874 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.874 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:47.874 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.874 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:48.133 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.133 [237/710] Linking static target lib/librte_dmadev.a 00:02:48.133 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:48.391 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.391 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:48.391 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:48.391 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:48.650 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:48.909 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:48.909 [245/710] Linking static target lib/librte_efd.a 00:02:48.909 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:49.167 [247/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.167 [248/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.167 [249/710] Linking static target lib/librte_cryptodev.a 00:02:49.167 [250/710] Linking target lib/librte_efd.so.24.0 00:02:49.167 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:49.426 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.426 [253/710] Linking target lib/librte_ethdev.so.24.0 00:02:49.426 [254/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:49.685 [255/710] Linking static target lib/librte_dispatcher.a 00:02:49.685 [256/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:49.685 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:49.685 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:49.943 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:49.943 [260/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:49.943 [261/710] Linking target lib/librte_bitratestats.so.24.0 00:02:49.943 [262/710] Linking static target lib/librte_gpudev.a 00:02:49.943 [263/710] Linking target lib/librte_bpf.so.24.0 00:02:49.943 [264/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:49.943 [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.944 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:49.944 [267/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:49.944 [268/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.510 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.510 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:50.510 [271/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:50.510 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:50.510 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:50.510 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.510 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.769 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:50.769 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:50.769 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.769 [279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.769 [280/710] Linking static target lib/librte_gro.a 00:02:50.769 [281/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:50.769 [282/710] Linking static target lib/librte_eventdev.a 00:02:51.062 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:51.062 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:51.062 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:51.062 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.062 [287/710] Linking target lib/librte_gro.so.24.0 00:02:51.062 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:51.337 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:51.337 [290/710] Linking static target lib/librte_gso.a 00:02:51.337 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:51.337 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.596 [293/710] Linking target lib/librte_gso.so.24.0 00:02:51.596 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:51.596 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:51.596 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:51.596 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:51.596 [298/710] Linking static target lib/librte_jobstats.a 00:02:51.855 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:51.855 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:51.855 [301/710] Linking static target lib/librte_ip_frag.a 00:02:51.855 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:51.855 [303/710] Linking static target lib/librte_latencystats.a 00:02:52.114 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.114 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:52.114 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.114 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.114 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:02:52.114 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:52.373 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:52.373 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:52.373 [312/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:52.373 [313/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:52.373 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:52.373 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.373 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.373 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.938 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.938 [319/710] 
Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:52.938 [320/710] Linking static target lib/librte_lpm.a 00:02:52.938 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:52.938 [322/710] Linking target lib/librte_eventdev.so.24.0 00:02:52.938 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:52.938 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.938 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:02:52.938 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:53.197 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.197 [328/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:53.197 [329/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:53.197 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.197 [331/710] Linking static target lib/librte_pcapng.a 00:02:53.197 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:53.197 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.455 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:53.455 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.455 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:53.455 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:53.714 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.714 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.714 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.714 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:53.714 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.972 [343/710] Linking static target lib/librte_power.a 00:02:53.972 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.972 [345/710] Linking static target lib/librte_regexdev.a 00:02:53.972 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:53.972 [347/710] Linking static target lib/librte_rawdev.a 00:02:53.972 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:53.972 [349/710] Linking static target lib/librte_member.a 00:02:54.231 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:54.231 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:54.231 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:54.231 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.489 [354/710] Linking target lib/librte_member.so.24.0 00:02:54.489 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:54.489 [356/710] Linking static target lib/librte_mldev.a 00:02:54.489 [357/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.489 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.489 [359/710] Linking target lib/librte_power.so.24.0 00:02:54.489 [360/710] Linking target lib/librte_rawdev.so.24.0 00:02:54.489 
[361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.489 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:54.747 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.747 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:54.747 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:55.006 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.006 [367/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:55.006 [368/710] Linking static target lib/librte_rib.a 00:02:55.006 [369/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.006 [370/710] Linking static target lib/librte_reorder.a 00:02:55.006 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:55.006 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:55.006 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:55.264 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:55.264 [375/710] Linking static target lib/librte_stack.a 00:02:55.264 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.264 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.264 [378/710] Linking static target lib/librte_security.a 00:02:55.264 [379/710] Linking target lib/librte_reorder.so.24.0 00:02:55.522 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.522 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.522 [382/710] Linking target lib/librte_stack.so.24.0 00:02:55.522 [383/710] Linking target lib/librte_rib.so.24.0 00:02:55.522 [384/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:55.522 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.522 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:55.522 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:55.780 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.780 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.780 [390/710] Linking target lib/librte_security.so.24.0 00:02:55.780 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:56.039 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:56.039 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:56.039 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:56.039 [395/710] Linking static target lib/librte_sched.a 00:02:56.298 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.556 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.556 [398/710] Linking target lib/librte_sched.so.24.0 00:02:56.556 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.556 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:56.556 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.814 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:57.072 [403/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:57.072 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.330 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:57.330 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:57.588 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:57.588 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:57.588 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:57.588 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.845 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:57.845 [412/710] Linking static target lib/librte_ipsec.a 00:02:57.845 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:58.103 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:58.103 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:58.103 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.103 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:58.103 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:58.362 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:58.362 [420/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:58.362 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:58.362 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:58.362 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:59.296 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:59.296 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:59.296 [426/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:59.296 [427/710] Linking static target lib/librte_pdcp.a 00:02:59.296 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:59.296 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:59.296 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:59.296 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:59.296 [432/710] Linking static target lib/librte_fib.a 00:02:59.555 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.555 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:59.555 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.555 [436/710] Linking target lib/librte_fib.so.24.0 00:02:59.814 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:00.072 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:00.331 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:00.331 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:00.331 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:00.331 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:00.589 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:00.589 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.848 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 
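[Editor's note] A side note, not part of the CI job itself: on a machine that has this build tree, the configuration and target set driving the [N/710] progress above can be read back from the build directory with stock meson and ninja tooling. Paths below assume the layout seen in this log.

    # Show the options this build-tmp directory was configured with
    # (the same values as the "User defined options" summary earlier in the log).
    meson configure /home/vagrant/spdk_repo/dpdk/build-tmp \
        | grep -E 'enable_drivers|enable_kmods|c_args|machine|tests'

    # List the targets ninja knows about; the "[N/710]" progress lines
    # are drawn from this set.
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -t targets all | head -n 20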
00:03:00.848 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.848 [447/710] Linking static target lib/librte_port.a 00:03:01.107 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:01.107 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:01.107 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:01.107 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:01.107 [452/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.377 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:01.377 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.377 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:01.377 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:01.377 [457/710] Linking target lib/librte_port.so.24.0 00:03:01.377 [458/710] Linking static target lib/librte_pdump.a 00:03:01.650 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:01.650 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.650 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:01.909 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:02.168 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:02.168 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:02.168 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:02.168 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:02.426 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:02.426 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:02.685 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:02.685 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:02.685 [471/710] Linking static target lib/librte_table.a 00:03:02.685 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:02.944 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:03.202 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.202 [475/710] Linking target lib/librte_table.so.24.0 00:03:03.202 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:03.461 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:03.461 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:03.720 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:03.720 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:03.979 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:04.238 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:04.238 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:04.238 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:04.238 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:04.238 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:04.806 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:04.806 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.806 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:04.806 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:05.064 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:05.064 [492/710] Linking static target lib/librte_graph.a 00:03:05.064 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:05.632 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.632 [495/710] Linking target lib/librte_graph.so.24.0 00:03:05.632 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:05.632 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.632 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:05.632 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:05.892 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:06.151 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:06.151 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:06.151 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:06.151 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:06.151 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.409 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:06.676 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:06.676 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:06.935 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.935 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.935 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.935 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.935 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:07.194 [514/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:07.194 [515/710] Linking static target lib/librte_node.a 00:03:07.452 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.452 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:07.452 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.452 [519/710] Linking target lib/librte_node.so.24.0 00:03:07.452 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:07.452 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:07.711 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:07.711 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.711 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:07.711 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:07.711 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.711 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:03:07.711 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.970 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.970 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.970 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:07.970 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:07.970 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:07.970 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:07.970 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:08.229 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.229 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:08.229 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.229 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:08.229 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.229 [541/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:08.229 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.229 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:08.487 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.487 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:08.487 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:09.054 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:09.312 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:09.312 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:09.312 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:09.312 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:10.248 [552/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:10.248 [553/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:10.248 [554/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:10.248 [555/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:10.248 [556/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:10.248 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:10.815 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:10.815 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:11.074 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:11.074 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:11.074 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:11.641 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:11.641 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:11.899 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:11.899 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:12.157 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:12.157 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:12.415 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:12.415 [570/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.415 [571/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:12.415 [572/710] Linking static target lib/librte_vhost.a 00:03:12.415 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:12.415 [574/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:12.674 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:12.932 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:12.932 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:12.933 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:12.933 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:13.191 [580/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:13.191 [581/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:13.191 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:13.453 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:13.453 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.453 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:13.453 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:13.453 [587/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.453 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:13.453 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:13.453 [590/710] Linking target lib/librte_vhost.so.24.0 00:03:13.453 [591/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:13.732 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:13.732 [593/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.006 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.006 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.264 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:14.264 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:14.264 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:14.264 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:14.522 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:14.779 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:14.779 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:15.038 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:15.038 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:15.038 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:15.296 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:15.296 [607/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:15.555 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:15.555 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:15.813 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:15.813 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:15.813 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:15.813 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:16.072 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:16.072 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:16.072 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:16.072 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:16.330 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:16.596 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:16.596 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:16.596 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:16.596 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:16.854 [623/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:16.854 [624/710] Linking static target lib/librte_pipeline.a 00:03:16.854 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:17.420 [626/710] Linking target app/dpdk-dumpcap 00:03:17.679 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:17.679 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:17.679 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:17.938 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:17.938 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:17.938 [632/710] Linking target app/dpdk-graph 00:03:18.196 [633/710] Linking target app/dpdk-pdump 00:03:18.196 [634/710] Linking target app/dpdk-proc-info 00:03:18.196 [635/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:18.196 [636/710] Linking target app/dpdk-test-acl 00:03:18.196 [637/710] Linking target app/dpdk-test-compress-perf 00:03:18.455 [638/710] Linking target app/dpdk-test-cmdline 00:03:18.455 [639/710] Linking target app/dpdk-test-crypto-perf 00:03:18.455 [640/710] Linking target app/dpdk-test-dma-perf 00:03:18.455 [641/710] Linking target app/dpdk-test-fib 00:03:19.021 [642/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:19.021 [643/710] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:19.021 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:19.021 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:19.021 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:19.021 [647/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:19.280 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:19.280 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:19.538 [650/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.538 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:19.538 [652/710] Linking target lib/librte_pipeline.so.24.0 00:03:19.538 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:19.538 [654/710] Linking target app/dpdk-test-gpudev 00:03:19.538 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:19.538 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:19.796 [657/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:19.796 [658/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:19.796 [659/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:20.055 [660/710] Linking target app/dpdk-test-eventdev 00:03:20.055 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:20.055 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:20.055 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:20.314 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:20.314 [665/710] Linking target app/dpdk-test-bbdev 00:03:20.314 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:20.314 [667/710] Linking target app/dpdk-test-flow-perf 00:03:20.582 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:20.582 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:20.846 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:20.846 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:20.846 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:20.846 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:21.103 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:21.104 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:21.104 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:21.104 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:21.361 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:21.620 [679/710] Linking target app/dpdk-test-mldev 00:03:21.620 [680/710] Linking target app/dpdk-test-pipeline 00:03:21.620 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:21.620 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:21.878 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:22.136 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:22.136 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:22.136 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:22.395 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:22.395 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:22.653 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:22.653 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:22.912 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:22.912 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:22.912 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:23.170 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:23.429 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:23.687 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:23.687 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:23.687 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:23.945 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:23.945 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:24.204 [701/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:24.204 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:24.204 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:24.204 [704/710] Linking target app/dpdk-test-regex 00:03:24.462 [705/710] Linking target app/dpdk-test-sad 00:03:24.462 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:24.462 [707/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:25.028 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:25.028 [709/710] Linking target app/dpdk-testpmd 00:03:25.287 [710/710] Linking target app/dpdk-test-security-perf 00:03:25.287 09:50:23 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:25.287 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:25.287 [0/1] Installing files. 
00:03:25.546 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.546 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.547 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.547 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:25.548 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.548 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.809 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.810 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.811 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.812 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:25.812 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:25.812 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.812 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
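(Reference note, not emitted by the job: the .a / .so.24.0 pairs above land in /home/vagrant/spdk_repo/dpdk/build/lib, the rte_*.h headers go to build/include, and the libdpdk.pc pkg-config files to build/lib/pkgconfig further down in this log. A minimal sketch of how a consumer could build against this local install follows; hello.c, the PKG_CONFIG_PATH/LD_LIBRARY_PATH overrides and the EAL flags are illustrative assumptions, not commands run by this build.)
    # Point pkg-config at the freshly installed DPDK tree shown in the log above
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # hello.c is a hypothetical consumer that calls rte_eal_init()/rte_eal_cleanup()
    cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello
    # Make the versioned librte_*.so.24 objects resolvable at run time
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib
    ./hello --no-huge -l 0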
00:03:25.813 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:25.813 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.072 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.072 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.072 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:26.072 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:26.072 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.072 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.072 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.072 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.072 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.072 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.073 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.074 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.335 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.336 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.337 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.337 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:26.337 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:26.337 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.337 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:26.337 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:26.337 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:26.337 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:26.337 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:26.337 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:26.337 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:26.337 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:26.337 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:26.337 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:26.337 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:26.337 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:26.337 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:26.337 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:26.337 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:26.337 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:26.337 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:26.337 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:26.337 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:26.337 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:26.337 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:26.337 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:26.337 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:26.337 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:26.337 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:26.337 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:26.337 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:26.337 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:26.337 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:26.337 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:26.337 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:26.337 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:26.337 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:26.337 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:26.337 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:26.337 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:26.337 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:26.337 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:26.337 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:26.337 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:26.337 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:26.337 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:26.337 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:26.337 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:26.337 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:26.337 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:26.337 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:26.337 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:26.337 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:26.337 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:26.337 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:26.337 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:26.337 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:26.337 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:26.337 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:26.337 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:26.337 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:26.337 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:26.337 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:26.337 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:26.337 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:26.337 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:26.337 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:26.337 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:26.337 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:26.337 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:26.337 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:26.337 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:26.337 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:26.337 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:26.337 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:26.337 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:26.337 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:26.337 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:26.337 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:26.337 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:26.337 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:26.337 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:26.337 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:26.337 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:26.337 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:26.337 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:26.337 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:26.337 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:26.337 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:26.337 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:26.337 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:26.337 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:26.337 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:26.337 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:26.337 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:26.337 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:26.337 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:26.337 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:26.337 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:26.338 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:26.338 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:26.338 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:26.338 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:26.338 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:26.338 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:26.338 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:26.338 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:26.338 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:26.338 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:26.338 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:26.338 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:26.338 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:26.338 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:26.338 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:26.338 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:26.338 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:26.338 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:26.338 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:26.338 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:26.338 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:26.338 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:26.338 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:26.338 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:26.338 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:26.338 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:26.338 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:26.338 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:26.338 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:26.338 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:26.338 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:26.338 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:26.338 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:26.338 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:26.338 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:26.338 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:26.338 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:26.338 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:26.338 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:26.338 09:50:24 -- common/autobuild_common.sh@192 -- $ uname -s
00:03:26.338 09:50:24 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:26.338 09:50:24 -- common/autobuild_common.sh@203 -- $ cat
00:03:26.338 ************************************
00:03:26.338 END TEST build_native_dpdk
00:03:26.338 ************************************
00:03:26.338 09:50:24 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:26.338
00:03:26.338 real 0m58.928s
00:03:26.338 user 7m8.441s
00:03:26.338 sys 1m7.298s
00:03:26.338 09:50:24 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:03:26.338 09:50:24 -- common/autotest_common.sh@10 -- $ set +x
00:03:26.338 09:50:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:26.338 09:50:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:26.338 09:50:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared
00:03:26.596 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:26.596 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:26.596 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:26.596 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:27.162 Using 'verbs' RDMA provider
00:03:42.607 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:03:54.813 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:54.813 go version go1.21.1 linux/amd64
00:03:54.813 Creating mk/config.mk...done.
00:03:54.813 Creating mk/cc.flags.mk...done.
00:03:54.813 Type 'make' to build.
00:03:54.813 09:50:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:54.813 09:50:51 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:03:54.813 09:50:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:03:54.813 09:50:51 -- common/autotest_common.sh@10 -- $ set +x
00:03:54.813 ************************************
00:03:54.813 START TEST make
00:03:54.813 ************************************
00:03:54.813 09:50:51 -- common/autotest_common.sh@1114 -- $ make -j10
00:03:54.813 make[1]: Nothing to be done for 'all'.
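The stage above finishes installing the freshly built DPDK into its build/ prefix and then configures and builds SPDK against it. For anyone reproducing that flow outside the CI harness, a minimal sketch distilled from the commands traced in the log follows; the meson install invocation and the reduced flag set are assumptions rather than the job's exact script, and the authoritative flag list is the full configure line recorded above.

    #!/usr/bin/env bash
    # Minimal reproduction sketch of the build flow recorded in this log.
    # Assumptions (not shown verbatim above): DPDK was configured with meson
    # into build-tmp/ with a prefix of build/, and only a subset of the
    # configure flags is carried over here.
    set -euo pipefail

    DPDK_DIR=/home/vagrant/spdk_repo/dpdk
    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Install the already-built DPDK into its build/ prefix
    # (this produces the "Installing ... to .../dpdk/build/include" lines).
    meson install -C "$DPDK_DIR/build-tmp"

    # Configure SPDK against that DPDK prefix, as on the configure line above.
    cd "$SPDK_DIR"
    ./configure --enable-debug --enable-werror \
                --with-dpdk="$DPDK_DIR/build" \
                --with-shared --enable-coverage

    # Build; the log runs this step as "run_test make make -j10".
    make -j10

If the configure step succeeds, make proceeds into the per-library compile output that follows below.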
00:04:16.737 CC lib/ut/ut.o 00:04:16.737 CC lib/log/log_flags.o 00:04:16.737 CC lib/log/log.o 00:04:16.737 CC lib/log/log_deprecated.o 00:04:16.737 CC lib/ut_mock/mock.o 00:04:16.737 LIB libspdk_ut_mock.a 00:04:16.737 LIB libspdk_ut.a 00:04:16.737 LIB libspdk_log.a 00:04:16.737 SO libspdk_ut_mock.so.5.0 00:04:16.737 SO libspdk_ut.so.1.0 00:04:16.737 SO libspdk_log.so.6.1 00:04:16.737 SYMLINK libspdk_ut_mock.so 00:04:16.737 SYMLINK libspdk_ut.so 00:04:16.737 SYMLINK libspdk_log.so 00:04:16.737 CC lib/dma/dma.o 00:04:16.737 CC lib/ioat/ioat.o 00:04:16.737 CXX lib/trace_parser/trace.o 00:04:16.737 CC lib/util/base64.o 00:04:16.737 CC lib/util/bit_array.o 00:04:16.737 CC lib/util/cpuset.o 00:04:16.737 CC lib/util/crc32.o 00:04:16.737 CC lib/util/crc16.o 00:04:16.737 CC lib/util/crc32c.o 00:04:16.737 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.737 CC lib/vfio_user/host/vfio_user.o 00:04:16.737 CC lib/util/crc32_ieee.o 00:04:16.737 CC lib/util/crc64.o 00:04:16.737 CC lib/util/dif.o 00:04:16.737 LIB libspdk_dma.a 00:04:16.737 CC lib/util/fd.o 00:04:16.737 SO libspdk_dma.so.3.0 00:04:16.737 CC lib/util/file.o 00:04:16.737 CC lib/util/hexlify.o 00:04:16.737 SYMLINK libspdk_dma.so 00:04:16.737 CC lib/util/iov.o 00:04:16.737 CC lib/util/math.o 00:04:16.737 LIB libspdk_ioat.a 00:04:16.737 SO libspdk_ioat.so.6.0 00:04:16.737 LIB libspdk_vfio_user.a 00:04:16.737 CC lib/util/pipe.o 00:04:16.737 CC lib/util/strerror_tls.o 00:04:16.737 CC lib/util/string.o 00:04:16.737 SYMLINK libspdk_ioat.so 00:04:16.737 CC lib/util/uuid.o 00:04:16.737 SO libspdk_vfio_user.so.4.0 00:04:16.737 CC lib/util/fd_group.o 00:04:16.737 CC lib/util/xor.o 00:04:16.737 SYMLINK libspdk_vfio_user.so 00:04:16.737 CC lib/util/zipf.o 00:04:16.995 LIB libspdk_util.a 00:04:17.254 SO libspdk_util.so.8.0 00:04:17.254 SYMLINK libspdk_util.so 00:04:17.254 LIB libspdk_trace_parser.a 00:04:17.254 SO libspdk_trace_parser.so.4.0 00:04:17.512 CC lib/idxd/idxd.o 00:04:17.512 CC lib/json/json_parse.o 00:04:17.512 CC lib/idxd/idxd_user.o 00:04:17.512 CC lib/json/json_util.o 00:04:17.512 CC lib/rdma/common.o 00:04:17.512 CC lib/conf/conf.o 00:04:17.512 CC lib/idxd/idxd_kernel.o 00:04:17.512 CC lib/env_dpdk/env.o 00:04:17.512 CC lib/vmd/vmd.o 00:04:17.512 SYMLINK libspdk_trace_parser.so 00:04:17.512 CC lib/vmd/led.o 00:04:17.512 CC lib/env_dpdk/memory.o 00:04:17.512 CC lib/env_dpdk/pci.o 00:04:17.771 LIB libspdk_conf.a 00:04:17.771 CC lib/json/json_write.o 00:04:17.771 CC lib/rdma/rdma_verbs.o 00:04:17.771 CC lib/env_dpdk/init.o 00:04:17.771 SO libspdk_conf.so.5.0 00:04:17.771 CC lib/env_dpdk/threads.o 00:04:17.771 SYMLINK libspdk_conf.so 00:04:17.771 CC lib/env_dpdk/pci_ioat.o 00:04:17.771 CC lib/env_dpdk/pci_virtio.o 00:04:17.771 CC lib/env_dpdk/pci_vmd.o 00:04:17.771 LIB libspdk_rdma.a 00:04:17.771 SO libspdk_rdma.so.5.0 00:04:18.029 LIB libspdk_json.a 00:04:18.029 CC lib/env_dpdk/pci_idxd.o 00:04:18.029 LIB libspdk_idxd.a 00:04:18.030 SO libspdk_json.so.5.1 00:04:18.030 SYMLINK libspdk_rdma.so 00:04:18.030 CC lib/env_dpdk/pci_event.o 00:04:18.030 SO libspdk_idxd.so.11.0 00:04:18.030 CC lib/env_dpdk/sigbus_handler.o 00:04:18.030 CC lib/env_dpdk/pci_dpdk.o 00:04:18.030 SYMLINK libspdk_json.so 00:04:18.030 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:18.030 SYMLINK libspdk_idxd.so 00:04:18.030 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:18.030 LIB libspdk_vmd.a 00:04:18.030 SO libspdk_vmd.so.5.0 00:04:18.030 CC lib/jsonrpc/jsonrpc_server.o 00:04:18.030 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:18.030 CC lib/jsonrpc/jsonrpc_client.o 00:04:18.030 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:18.030 SYMLINK libspdk_vmd.so 00:04:18.288 LIB libspdk_jsonrpc.a 00:04:18.546 SO libspdk_jsonrpc.so.5.1 00:04:18.546 SYMLINK libspdk_jsonrpc.so 00:04:18.546 CC lib/rpc/rpc.o 00:04:18.804 LIB libspdk_env_dpdk.a 00:04:18.804 SO libspdk_env_dpdk.so.13.0 00:04:18.804 LIB libspdk_rpc.a 00:04:18.804 SO libspdk_rpc.so.5.0 00:04:19.063 SYMLINK libspdk_rpc.so 00:04:19.063 SYMLINK libspdk_env_dpdk.so 00:04:19.063 CC lib/trace/trace.o 00:04:19.063 CC lib/trace/trace_flags.o 00:04:19.063 CC lib/trace/trace_rpc.o 00:04:19.063 CC lib/sock/sock.o 00:04:19.063 CC lib/sock/sock_rpc.o 00:04:19.063 CC lib/notify/notify.o 00:04:19.063 CC lib/notify/notify_rpc.o 00:04:19.322 LIB libspdk_notify.a 00:04:19.322 SO libspdk_notify.so.5.0 00:04:19.322 LIB libspdk_trace.a 00:04:19.322 SYMLINK libspdk_notify.so 00:04:19.322 SO libspdk_trace.so.9.0 00:04:19.322 SYMLINK libspdk_trace.so 00:04:19.580 LIB libspdk_sock.a 00:04:19.580 SO libspdk_sock.so.8.0 00:04:19.580 SYMLINK libspdk_sock.so 00:04:19.580 CC lib/thread/thread.o 00:04:19.580 CC lib/thread/iobuf.o 00:04:19.839 CC lib/nvme/nvme_ctrlr.o 00:04:19.839 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:19.839 CC lib/nvme/nvme_fabric.o 00:04:19.839 CC lib/nvme/nvme_ns_cmd.o 00:04:19.839 CC lib/nvme/nvme_pcie_common.o 00:04:19.839 CC lib/nvme/nvme_ns.o 00:04:19.839 CC lib/nvme/nvme_pcie.o 00:04:19.839 CC lib/nvme/nvme_qpair.o 00:04:19.839 CC lib/nvme/nvme.o 00:04:20.406 CC lib/nvme/nvme_quirks.o 00:04:20.406 CC lib/nvme/nvme_transport.o 00:04:20.664 CC lib/nvme/nvme_discovery.o 00:04:20.664 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:20.664 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:20.664 CC lib/nvme/nvme_tcp.o 00:04:20.664 CC lib/nvme/nvme_opal.o 00:04:20.664 CC lib/nvme/nvme_io_msg.o 00:04:20.923 CC lib/nvme/nvme_poll_group.o 00:04:21.192 LIB libspdk_thread.a 00:04:21.192 SO libspdk_thread.so.9.0 00:04:21.192 CC lib/nvme/nvme_zns.o 00:04:21.192 CC lib/nvme/nvme_cuse.o 00:04:21.192 SYMLINK libspdk_thread.so 00:04:21.192 CC lib/nvme/nvme_vfio_user.o 00:04:21.192 CC lib/nvme/nvme_rdma.o 00:04:21.469 CC lib/accel/accel.o 00:04:21.469 CC lib/blob/blobstore.o 00:04:21.469 CC lib/init/json_config.o 00:04:21.727 CC lib/init/subsystem.o 00:04:21.727 CC lib/init/subsystem_rpc.o 00:04:21.727 CC lib/init/rpc.o 00:04:21.727 CC lib/accel/accel_rpc.o 00:04:21.986 CC lib/accel/accel_sw.o 00:04:21.986 CC lib/blob/request.o 00:04:21.986 CC lib/virtio/virtio.o 00:04:21.986 LIB libspdk_init.a 00:04:21.986 SO libspdk_init.so.4.0 00:04:21.986 CC lib/blob/zeroes.o 00:04:21.986 SYMLINK libspdk_init.so 00:04:21.986 CC lib/virtio/virtio_vhost_user.o 00:04:22.244 CC lib/virtio/virtio_vfio_user.o 00:04:22.244 CC lib/virtio/virtio_pci.o 00:04:22.244 CC lib/blob/blob_bs_dev.o 00:04:22.244 CC lib/event/app.o 00:04:22.244 CC lib/event/reactor.o 00:04:22.244 CC lib/event/app_rpc.o 00:04:22.244 CC lib/event/log_rpc.o 00:04:22.244 LIB libspdk_accel.a 00:04:22.244 CC lib/event/scheduler_static.o 00:04:22.503 SO libspdk_accel.so.14.0 00:04:22.503 LIB libspdk_virtio.a 00:04:22.503 SYMLINK libspdk_accel.so 00:04:22.503 SO libspdk_virtio.so.6.0 00:04:22.503 SYMLINK libspdk_virtio.so 00:04:22.503 CC lib/bdev/bdev.o 00:04:22.503 CC lib/bdev/bdev_rpc.o 00:04:22.503 CC lib/bdev/scsi_nvme.o 00:04:22.503 CC lib/bdev/bdev_zone.o 00:04:22.503 CC lib/bdev/part.o 00:04:22.503 LIB libspdk_nvme.a 00:04:22.762 LIB libspdk_event.a 00:04:22.762 SO libspdk_event.so.12.0 00:04:22.762 SO libspdk_nvme.so.12.0 00:04:22.762 SYMLINK libspdk_event.so 00:04:23.020 SYMLINK libspdk_nvme.so 00:04:23.956 LIB 
libspdk_blob.a 00:04:23.956 SO libspdk_blob.so.10.1 00:04:24.215 SYMLINK libspdk_blob.so 00:04:24.215 CC lib/lvol/lvol.o 00:04:24.215 CC lib/blobfs/tree.o 00:04:24.215 CC lib/blobfs/blobfs.o 00:04:25.151 LIB libspdk_bdev.a 00:04:25.151 SO libspdk_bdev.so.14.0 00:04:25.151 SYMLINK libspdk_bdev.so 00:04:25.151 LIB libspdk_blobfs.a 00:04:25.151 LIB libspdk_lvol.a 00:04:25.151 SO libspdk_blobfs.so.9.0 00:04:25.151 SO libspdk_lvol.so.9.1 00:04:25.151 CC lib/scsi/dev.o 00:04:25.151 CC lib/nbd/nbd.o 00:04:25.151 CC lib/ublk/ublk.o 00:04:25.151 CC lib/nbd/nbd_rpc.o 00:04:25.151 CC lib/nvmf/ctrlr.o 00:04:25.151 CC lib/ftl/ftl_core.o 00:04:25.151 CC lib/scsi/lun.o 00:04:25.151 CC lib/nvmf/ctrlr_discovery.o 00:04:25.151 SYMLINK libspdk_blobfs.so 00:04:25.151 CC lib/nvmf/ctrlr_bdev.o 00:04:25.410 SYMLINK libspdk_lvol.so 00:04:25.410 CC lib/nvmf/subsystem.o 00:04:25.410 CC lib/nvmf/nvmf.o 00:04:25.410 CC lib/nvmf/nvmf_rpc.o 00:04:25.668 CC lib/scsi/port.o 00:04:25.668 LIB libspdk_nbd.a 00:04:25.668 CC lib/ftl/ftl_init.o 00:04:25.668 SO libspdk_nbd.so.6.0 00:04:25.668 CC lib/scsi/scsi.o 00:04:25.668 CC lib/scsi/scsi_bdev.o 00:04:25.668 SYMLINK libspdk_nbd.so 00:04:25.668 CC lib/nvmf/transport.o 00:04:25.926 CC lib/nvmf/tcp.o 00:04:25.926 CC lib/ftl/ftl_layout.o 00:04:25.926 CC lib/ublk/ublk_rpc.o 00:04:25.926 CC lib/nvmf/rdma.o 00:04:25.926 LIB libspdk_ublk.a 00:04:26.185 SO libspdk_ublk.so.2.0 00:04:26.185 SYMLINK libspdk_ublk.so 00:04:26.185 CC lib/ftl/ftl_debug.o 00:04:26.185 CC lib/scsi/scsi_pr.o 00:04:26.185 CC lib/scsi/scsi_rpc.o 00:04:26.185 CC lib/scsi/task.o 00:04:26.185 CC lib/ftl/ftl_io.o 00:04:26.443 CC lib/ftl/ftl_sb.o 00:04:26.443 CC lib/ftl/ftl_l2p.o 00:04:26.443 CC lib/ftl/ftl_l2p_flat.o 00:04:26.443 CC lib/ftl/ftl_nv_cache.o 00:04:26.443 LIB libspdk_scsi.a 00:04:26.443 CC lib/ftl/ftl_band.o 00:04:26.443 SO libspdk_scsi.so.8.0 00:04:26.443 CC lib/ftl/ftl_band_ops.o 00:04:26.443 CC lib/ftl/ftl_writer.o 00:04:26.705 CC lib/ftl/ftl_rq.o 00:04:26.705 CC lib/ftl/ftl_reloc.o 00:04:26.705 SYMLINK libspdk_scsi.so 00:04:26.705 CC lib/iscsi/conn.o 00:04:26.705 CC lib/ftl/ftl_l2p_cache.o 00:04:26.968 CC lib/ftl/ftl_p2l.o 00:04:26.968 CC lib/ftl/mngt/ftl_mngt.o 00:04:26.968 CC lib/iscsi/init_grp.o 00:04:26.968 CC lib/vhost/vhost.o 00:04:26.968 CC lib/vhost/vhost_rpc.o 00:04:27.227 CC lib/iscsi/iscsi.o 00:04:27.227 CC lib/iscsi/md5.o 00:04:27.227 CC lib/iscsi/param.o 00:04:27.227 CC lib/iscsi/portal_grp.o 00:04:27.227 CC lib/vhost/vhost_scsi.o 00:04:27.227 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:27.485 CC lib/iscsi/tgt_node.o 00:04:27.485 CC lib/vhost/vhost_blk.o 00:04:27.485 CC lib/iscsi/iscsi_subsystem.o 00:04:27.485 CC lib/iscsi/iscsi_rpc.o 00:04:27.485 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:27.743 CC lib/vhost/rte_vhost_user.o 00:04:27.743 CC lib/iscsi/task.o 00:04:27.743 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:27.743 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:27.743 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:27.743 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:27.743 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:27.743 LIB libspdk_nvmf.a 00:04:28.002 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.002 SO libspdk_nvmf.so.17.0 00:04:28.002 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.002 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.002 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.002 SYMLINK libspdk_nvmf.so 00:04:28.260 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.260 CC lib/ftl/utils/ftl_conf.o 00:04:28.260 CC lib/ftl/utils/ftl_md.o 00:04:28.260 CC lib/ftl/utils/ftl_mempool.o 00:04:28.260 CC lib/ftl/utils/ftl_bitmap.o 00:04:28.260 CC 
lib/ftl/utils/ftl_property.o 00:04:28.260 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:28.260 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:28.260 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:28.518 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:28.518 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:28.518 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:28.518 LIB libspdk_iscsi.a 00:04:28.518 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:28.518 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:28.518 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:28.518 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:28.518 CC lib/ftl/base/ftl_base_dev.o 00:04:28.518 CC lib/ftl/base/ftl_base_bdev.o 00:04:28.518 SO libspdk_iscsi.so.7.0 00:04:28.776 CC lib/ftl/ftl_trace.o 00:04:28.776 LIB libspdk_vhost.a 00:04:28.776 SYMLINK libspdk_iscsi.so 00:04:28.776 SO libspdk_vhost.so.7.1 00:04:28.776 SYMLINK libspdk_vhost.so 00:04:29.034 LIB libspdk_ftl.a 00:04:29.034 SO libspdk_ftl.so.8.0 00:04:29.293 SYMLINK libspdk_ftl.so 00:04:29.551 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.551 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.551 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.551 CC module/sock/posix/posix.o 00:04:29.551 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.551 CC module/accel/dsa/accel_dsa.o 00:04:29.551 CC module/accel/error/accel_error.o 00:04:29.551 CC module/accel/ioat/accel_ioat.o 00:04:29.551 CC module/accel/iaa/accel_iaa.o 00:04:29.551 CC module/blob/bdev/blob_bdev.o 00:04:29.551 LIB libspdk_env_dpdk_rpc.a 00:04:29.810 SO libspdk_env_dpdk_rpc.so.5.0 00:04:29.810 LIB libspdk_scheduler_gscheduler.a 00:04:29.810 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.810 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.810 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:29.810 SO libspdk_scheduler_gscheduler.so.3.0 00:04:29.810 CC module/accel/error/accel_error_rpc.o 00:04:29.810 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.810 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.810 LIB libspdk_scheduler_dynamic.a 00:04:29.810 SO libspdk_scheduler_dynamic.so.3.0 00:04:29.810 SYMLINK libspdk_scheduler_gscheduler.so 00:04:29.810 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.810 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.810 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.810 LIB libspdk_blob_bdev.a 00:04:29.810 SO libspdk_blob_bdev.so.10.1 00:04:29.810 LIB libspdk_accel_error.a 00:04:29.810 LIB libspdk_accel_iaa.a 00:04:29.810 LIB libspdk_accel_ioat.a 00:04:29.810 SO libspdk_accel_error.so.1.0 00:04:29.810 SO libspdk_accel_ioat.so.5.0 00:04:29.810 SO libspdk_accel_iaa.so.2.0 00:04:29.810 SYMLINK libspdk_blob_bdev.so 00:04:30.069 LIB libspdk_accel_dsa.a 00:04:30.069 SO libspdk_accel_dsa.so.4.0 00:04:30.069 SYMLINK libspdk_accel_error.so 00:04:30.069 SYMLINK libspdk_accel_iaa.so 00:04:30.069 SYMLINK libspdk_accel_ioat.so 00:04:30.069 SYMLINK libspdk_accel_dsa.so 00:04:30.069 CC module/bdev/error/vbdev_error.o 00:04:30.069 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.069 CC module/bdev/gpt/gpt.o 00:04:30.069 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.069 CC module/bdev/nvme/bdev_nvme.o 00:04:30.069 CC module/bdev/malloc/bdev_malloc.o 00:04:30.069 CC module/bdev/null/bdev_null.o 00:04:30.069 CC module/bdev/delay/vbdev_delay.o 00:04:30.069 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.327 LIB libspdk_sock_posix.a 00:04:30.327 SO libspdk_sock_posix.so.5.0 00:04:30.327 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.327 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.327 SYMLINK libspdk_sock_posix.so 00:04:30.327 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.327 CC module/bdev/null/bdev_null_rpc.o 00:04:30.327 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.586 LIB libspdk_blobfs_bdev.a 00:04:30.586 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.586 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.586 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.586 SO libspdk_blobfs_bdev.so.5.0 00:04:30.586 LIB libspdk_bdev_malloc.a 00:04:30.586 LIB libspdk_bdev_error.a 00:04:30.586 LIB libspdk_bdev_null.a 00:04:30.586 SO libspdk_bdev_malloc.so.5.0 00:04:30.586 SO libspdk_bdev_error.so.5.0 00:04:30.586 SYMLINK libspdk_blobfs_bdev.so 00:04:30.586 SO libspdk_bdev_null.so.5.0 00:04:30.586 LIB libspdk_bdev_gpt.a 00:04:30.586 SYMLINK libspdk_bdev_malloc.so 00:04:30.586 SO libspdk_bdev_gpt.so.5.0 00:04:30.586 SYMLINK libspdk_bdev_error.so 00:04:30.586 SYMLINK libspdk_bdev_null.so 00:04:30.586 LIB libspdk_bdev_passthru.a 00:04:30.586 LIB libspdk_bdev_delay.a 00:04:30.586 SO libspdk_bdev_passthru.so.5.0 00:04:30.586 SYMLINK libspdk_bdev_gpt.so 00:04:30.586 SO libspdk_bdev_delay.so.5.0 00:04:30.586 CC module/bdev/raid/bdev_raid.o 00:04:30.586 CC module/bdev/ftl/bdev_ftl.o 00:04:30.845 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.845 CC module/bdev/aio/bdev_aio.o 00:04:30.845 CC module/bdev/split/vbdev_split.o 00:04:30.845 SYMLINK libspdk_bdev_delay.so 00:04:30.845 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.845 SYMLINK libspdk_bdev_passthru.so 00:04:30.845 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.845 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.845 LIB libspdk_bdev_lvol.a 00:04:30.845 SO libspdk_bdev_lvol.so.5.0 00:04:31.103 CC module/bdev/split/vbdev_split_rpc.o 00:04:31.103 CC module/bdev/aio/bdev_aio_rpc.o 00:04:31.103 LIB libspdk_bdev_ftl.a 00:04:31.103 SO libspdk_bdev_ftl.so.5.0 00:04:31.103 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:31.103 SYMLINK libspdk_bdev_lvol.so 00:04:31.103 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:31.103 CC module/bdev/raid/bdev_raid_rpc.o 00:04:31.103 SYMLINK libspdk_bdev_ftl.so 00:04:31.103 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.103 LIB libspdk_bdev_split.a 00:04:31.103 SO libspdk_bdev_split.so.5.0 00:04:31.103 LIB libspdk_bdev_aio.a 00:04:31.103 LIB libspdk_bdev_iscsi.a 00:04:31.103 LIB libspdk_bdev_zone_block.a 00:04:31.362 SO libspdk_bdev_aio.so.5.0 00:04:31.362 SO libspdk_bdev_iscsi.so.5.0 00:04:31.362 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:31.362 SO libspdk_bdev_zone_block.so.5.0 00:04:31.362 SYMLINK libspdk_bdev_split.so 00:04:31.362 CC module/bdev/raid/raid0.o 00:04:31.362 SYMLINK libspdk_bdev_aio.so 00:04:31.362 CC module/bdev/raid/raid1.o 00:04:31.362 SYMLINK libspdk_bdev_iscsi.so 00:04:31.362 CC module/bdev/raid/concat.o 00:04:31.362 SYMLINK libspdk_bdev_zone_block.so 00:04:31.362 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:31.362 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:31.362 CC module/bdev/nvme/nvme_rpc.o 00:04:31.362 CC module/bdev/nvme/bdev_mdns_client.o 00:04:31.621 CC module/bdev/nvme/vbdev_opal.o 00:04:31.621 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:31.621 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:31.621 LIB libspdk_bdev_raid.a 00:04:31.621 SO libspdk_bdev_raid.so.5.0 00:04:31.621 SYMLINK libspdk_bdev_raid.so 00:04:31.879 LIB libspdk_bdev_virtio.a 00:04:31.879 SO libspdk_bdev_virtio.so.5.0 00:04:31.879 SYMLINK libspdk_bdev_virtio.so 00:04:32.138 LIB libspdk_bdev_nvme.a 00:04:32.396 SO libspdk_bdev_nvme.so.6.0 00:04:32.396 SYMLINK libspdk_bdev_nvme.so 00:04:32.655 CC module/event/subsystems/sock/sock.o 00:04:32.655 
CC module/event/subsystems/vmd/vmd.o 00:04:32.655 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:32.655 CC module/event/subsystems/iobuf/iobuf.o 00:04:32.655 CC module/event/subsystems/scheduler/scheduler.o 00:04:32.655 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:32.655 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:32.655 LIB libspdk_event_sock.a 00:04:32.913 LIB libspdk_event_vhost_blk.a 00:04:32.913 LIB libspdk_event_vmd.a 00:04:32.913 LIB libspdk_event_scheduler.a 00:04:32.913 LIB libspdk_event_iobuf.a 00:04:32.913 SO libspdk_event_vhost_blk.so.2.0 00:04:32.913 SO libspdk_event_sock.so.4.0 00:04:32.913 SO libspdk_event_vmd.so.5.0 00:04:32.913 SO libspdk_event_scheduler.so.3.0 00:04:32.913 SO libspdk_event_iobuf.so.2.0 00:04:32.913 SYMLINK libspdk_event_vhost_blk.so 00:04:32.913 SYMLINK libspdk_event_sock.so 00:04:32.913 SYMLINK libspdk_event_scheduler.so 00:04:32.913 SYMLINK libspdk_event_vmd.so 00:04:32.913 SYMLINK libspdk_event_iobuf.so 00:04:33.172 CC module/event/subsystems/accel/accel.o 00:04:33.172 LIB libspdk_event_accel.a 00:04:33.172 SO libspdk_event_accel.so.5.0 00:04:33.172 SYMLINK libspdk_event_accel.so 00:04:33.431 CC module/event/subsystems/bdev/bdev.o 00:04:33.690 LIB libspdk_event_bdev.a 00:04:33.690 SO libspdk_event_bdev.so.5.0 00:04:33.690 SYMLINK libspdk_event_bdev.so 00:04:33.948 CC module/event/subsystems/nbd/nbd.o 00:04:33.948 CC module/event/subsystems/scsi/scsi.o 00:04:33.948 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:33.948 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:33.948 CC module/event/subsystems/ublk/ublk.o 00:04:33.948 LIB libspdk_event_nbd.a 00:04:33.948 LIB libspdk_event_ublk.a 00:04:33.948 LIB libspdk_event_scsi.a 00:04:34.207 SO libspdk_event_nbd.so.5.0 00:04:34.207 SO libspdk_event_ublk.so.2.0 00:04:34.207 SO libspdk_event_scsi.so.5.0 00:04:34.207 SYMLINK libspdk_event_nbd.so 00:04:34.207 SYMLINK libspdk_event_ublk.so 00:04:34.207 SYMLINK libspdk_event_scsi.so 00:04:34.207 LIB libspdk_event_nvmf.a 00:04:34.207 SO libspdk_event_nvmf.so.5.0 00:04:34.207 SYMLINK libspdk_event_nvmf.so 00:04:34.207 CC module/event/subsystems/iscsi/iscsi.o 00:04:34.207 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:34.466 LIB libspdk_event_vhost_scsi.a 00:04:34.466 LIB libspdk_event_iscsi.a 00:04:34.466 SO libspdk_event_vhost_scsi.so.2.0 00:04:34.466 SO libspdk_event_iscsi.so.5.0 00:04:34.466 SYMLINK libspdk_event_vhost_scsi.so 00:04:34.725 SYMLINK libspdk_event_iscsi.so 00:04:34.725 SO libspdk.so.5.0 00:04:34.725 SYMLINK libspdk.so 00:04:34.983 CC app/trace_record/trace_record.o 00:04:34.984 CXX app/trace/trace.o 00:04:34.984 CC examples/nvme/hello_world/hello_world.o 00:04:34.984 CC app/nvmf_tgt/nvmf_main.o 00:04:34.984 CC examples/ioat/perf/perf.o 00:04:34.984 CC examples/accel/perf/accel_perf.o 00:04:34.984 CC examples/bdev/hello_world/hello_bdev.o 00:04:34.984 CC test/app/bdev_svc/bdev_svc.o 00:04:34.984 CC examples/blob/hello_world/hello_blob.o 00:04:34.984 CC test/accel/dif/dif.o 00:04:35.242 LINK spdk_trace_record 00:04:35.242 LINK nvmf_tgt 00:04:35.242 LINK bdev_svc 00:04:35.242 LINK hello_world 00:04:35.242 LINK ioat_perf 00:04:35.242 LINK hello_blob 00:04:35.242 LINK hello_bdev 00:04:35.242 LINK spdk_trace 00:04:35.501 CC examples/ioat/verify/verify.o 00:04:35.501 CC examples/nvme/reconnect/reconnect.o 00:04:35.501 LINK dif 00:04:35.501 LINK accel_perf 00:04:35.501 CC examples/blob/cli/blobcli.o 00:04:35.501 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:35.501 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.501 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:35.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:35.501 CC examples/bdev/bdevperf/bdevperf.o 00:04:35.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:35.501 LINK verify 00:04:35.759 LINK iscsi_tgt 00:04:35.759 CC test/bdev/bdevio/bdevio.o 00:04:35.759 LINK reconnect 00:04:35.759 CC test/blobfs/mkfs/mkfs.o 00:04:35.759 CC app/spdk_tgt/spdk_tgt.o 00:04:36.018 LINK blobcli 00:04:36.018 LINK nvme_fuzz 00:04:36.018 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:36.018 LINK vhost_fuzz 00:04:36.018 CC test/app/histogram_perf/histogram_perf.o 00:04:36.018 LINK spdk_tgt 00:04:36.018 LINK mkfs 00:04:36.018 LINK bdevio 00:04:36.018 CC examples/nvme/arbitration/arbitration.o 00:04:36.018 CC app/spdk_lspci/spdk_lspci.o 00:04:36.018 LINK histogram_perf 00:04:36.276 CC test/app/jsoncat/jsoncat.o 00:04:36.276 CC test/app/stub/stub.o 00:04:36.276 LINK spdk_lspci 00:04:36.276 LINK bdevperf 00:04:36.277 TEST_HEADER include/spdk/accel.h 00:04:36.277 TEST_HEADER include/spdk/accel_module.h 00:04:36.277 TEST_HEADER include/spdk/assert.h 00:04:36.277 TEST_HEADER include/spdk/barrier.h 00:04:36.277 TEST_HEADER include/spdk/base64.h 00:04:36.277 TEST_HEADER include/spdk/bdev.h 00:04:36.277 TEST_HEADER include/spdk/bdev_module.h 00:04:36.277 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.277 TEST_HEADER include/spdk/bit_array.h 00:04:36.277 TEST_HEADER include/spdk/bit_pool.h 00:04:36.277 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.277 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.277 TEST_HEADER include/spdk/blobfs.h 00:04:36.277 TEST_HEADER include/spdk/blob.h 00:04:36.277 TEST_HEADER include/spdk/conf.h 00:04:36.277 TEST_HEADER include/spdk/config.h 00:04:36.277 TEST_HEADER include/spdk/cpuset.h 00:04:36.277 TEST_HEADER include/spdk/crc16.h 00:04:36.277 TEST_HEADER include/spdk/crc32.h 00:04:36.277 TEST_HEADER include/spdk/crc64.h 00:04:36.277 TEST_HEADER include/spdk/dif.h 00:04:36.277 LINK jsoncat 00:04:36.277 TEST_HEADER include/spdk/dma.h 00:04:36.277 TEST_HEADER include/spdk/endian.h 00:04:36.277 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.277 TEST_HEADER include/spdk/env.h 00:04:36.277 TEST_HEADER include/spdk/event.h 00:04:36.277 TEST_HEADER include/spdk/fd_group.h 00:04:36.277 TEST_HEADER include/spdk/fd.h 00:04:36.277 TEST_HEADER include/spdk/file.h 00:04:36.277 TEST_HEADER include/spdk/ftl.h 00:04:36.277 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.277 TEST_HEADER include/spdk/hexlify.h 00:04:36.277 TEST_HEADER include/spdk/histogram_data.h 00:04:36.277 TEST_HEADER include/spdk/idxd.h 00:04:36.277 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.277 TEST_HEADER include/spdk/init.h 00:04:36.277 TEST_HEADER include/spdk/ioat.h 00:04:36.277 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.277 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.277 TEST_HEADER include/spdk/json.h 00:04:36.277 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.277 TEST_HEADER include/spdk/likely.h 00:04:36.277 TEST_HEADER include/spdk/log.h 00:04:36.277 TEST_HEADER include/spdk/lvol.h 00:04:36.277 TEST_HEADER include/spdk/memory.h 00:04:36.277 CC examples/sock/hello_world/hello_sock.o 00:04:36.277 TEST_HEADER include/spdk/mmio.h 00:04:36.277 TEST_HEADER include/spdk/nbd.h 00:04:36.277 TEST_HEADER include/spdk/notify.h 00:04:36.546 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.546 TEST_HEADER include/spdk/nvme.h 00:04:36.546 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.546 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.546 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.546 
TEST_HEADER include/spdk/nvme_spec.h 00:04:36.546 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.546 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.546 LINK stub 00:04:36.546 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.546 TEST_HEADER include/spdk/nvmf.h 00:04:36.546 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.546 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.546 TEST_HEADER include/spdk/opal.h 00:04:36.546 TEST_HEADER include/spdk/opal_spec.h 00:04:36.546 LINK nvme_manage 00:04:36.546 TEST_HEADER include/spdk/pci_ids.h 00:04:36.546 TEST_HEADER include/spdk/pipe.h 00:04:36.546 LINK arbitration 00:04:36.546 TEST_HEADER include/spdk/queue.h 00:04:36.546 TEST_HEADER include/spdk/reduce.h 00:04:36.546 TEST_HEADER include/spdk/rpc.h 00:04:36.546 TEST_HEADER include/spdk/scheduler.h 00:04:36.546 TEST_HEADER include/spdk/scsi.h 00:04:36.546 TEST_HEADER include/spdk/scsi_spec.h 00:04:36.546 TEST_HEADER include/spdk/sock.h 00:04:36.546 TEST_HEADER include/spdk/stdinc.h 00:04:36.546 TEST_HEADER include/spdk/string.h 00:04:36.546 TEST_HEADER include/spdk/thread.h 00:04:36.546 TEST_HEADER include/spdk/trace.h 00:04:36.546 TEST_HEADER include/spdk/trace_parser.h 00:04:36.546 TEST_HEADER include/spdk/tree.h 00:04:36.546 TEST_HEADER include/spdk/ublk.h 00:04:36.546 TEST_HEADER include/spdk/util.h 00:04:36.546 TEST_HEADER include/spdk/uuid.h 00:04:36.546 TEST_HEADER include/spdk/version.h 00:04:36.546 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.546 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:36.546 TEST_HEADER include/spdk/vhost.h 00:04:36.546 CC app/spdk_nvme_perf/perf.o 00:04:36.546 TEST_HEADER include/spdk/vmd.h 00:04:36.546 TEST_HEADER include/spdk/xor.h 00:04:36.546 TEST_HEADER include/spdk/zipf.h 00:04:36.546 CXX test/cpp_headers/accel.o 00:04:36.546 CC app/spdk_nvme_identify/identify.o 00:04:36.546 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.546 LINK lsvmd 00:04:36.546 CXX test/cpp_headers/accel_module.o 00:04:36.546 CXX test/cpp_headers/assert.o 00:04:36.546 LINK hello_sock 00:04:36.546 CC examples/nvme/hotplug/hotplug.o 00:04:36.819 CC app/spdk_top/spdk_top.o 00:04:36.819 LINK spdk_nvme_discover 00:04:36.819 CXX test/cpp_headers/barrier.o 00:04:36.819 CC examples/vmd/led/led.o 00:04:36.819 CXX test/cpp_headers/base64.o 00:04:36.819 CC app/vhost/vhost.o 00:04:36.819 LINK hotplug 00:04:36.819 CXX test/cpp_headers/bdev.o 00:04:36.819 LINK led 00:04:37.077 CC app/spdk_dd/spdk_dd.o 00:04:37.077 CC app/fio/nvme/fio_plugin.o 00:04:37.077 LINK vhost 00:04:37.077 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.077 CXX test/cpp_headers/bdev_module.o 00:04:37.077 LINK iscsi_fuzz 00:04:37.077 CC app/fio/bdev/fio_plugin.o 00:04:37.336 CXX test/cpp_headers/bdev_zone.o 00:04:37.336 LINK spdk_nvme_perf 00:04:37.336 LINK spdk_nvme_identify 00:04:37.336 LINK cmb_copy 00:04:37.336 CC examples/nvme/abort/abort.o 00:04:37.336 LINK spdk_dd 00:04:37.336 CXX test/cpp_headers/bit_array.o 00:04:37.594 LINK spdk_top 00:04:37.594 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.594 CXX test/cpp_headers/bit_pool.o 00:04:37.594 CC test/dma/test_dma/test_dma.o 00:04:37.594 CC examples/util/zipf/zipf.o 00:04:37.594 LINK spdk_nvme 00:04:37.594 CXX test/cpp_headers/blob_bdev.o 00:04:37.594 CC examples/nvmf/nvmf/nvmf.o 00:04:37.594 LINK spdk_bdev 00:04:37.594 CXX test/cpp_headers/blobfs_bdev.o 00:04:37.594 CXX test/cpp_headers/blobfs.o 00:04:37.594 LINK pmr_persistence 00:04:37.594 LINK abort 00:04:37.853 LINK zipf 00:04:37.853 CXX test/cpp_headers/blob.o 00:04:37.853 CC test/env/vtophys/vtophys.o 
00:04:37.853 CXX test/cpp_headers/conf.o 00:04:37.853 CXX test/cpp_headers/config.o 00:04:37.853 CXX test/cpp_headers/cpuset.o 00:04:37.853 LINK nvmf 00:04:37.853 CXX test/cpp_headers/crc16.o 00:04:37.853 CC test/env/mem_callbacks/mem_callbacks.o 00:04:37.853 CXX test/cpp_headers/crc32.o 00:04:37.853 LINK test_dma 00:04:37.853 LINK vtophys 00:04:38.111 CC examples/thread/thread/thread_ex.o 00:04:38.111 CXX test/cpp_headers/crc64.o 00:04:38.111 CC examples/idxd/perf/perf.o 00:04:38.111 CXX test/cpp_headers/dif.o 00:04:38.111 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:38.111 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:38.111 CC test/event/event_perf/event_perf.o 00:04:38.370 LINK thread 00:04:38.370 CXX test/cpp_headers/dma.o 00:04:38.370 LINK env_dpdk_post_init 00:04:38.370 CC test/lvol/esnap/esnap.o 00:04:38.370 LINK event_perf 00:04:38.370 LINK interrupt_tgt 00:04:38.370 CC test/nvme/aer/aer.o 00:04:38.370 LINK idxd_perf 00:04:38.370 CXX test/cpp_headers/endian.o 00:04:38.370 CXX test/cpp_headers/env_dpdk.o 00:04:38.370 LINK mem_callbacks 00:04:38.628 CC test/event/reactor/reactor.o 00:04:38.628 CC test/event/reactor_perf/reactor_perf.o 00:04:38.628 CXX test/cpp_headers/env.o 00:04:38.628 CC test/event/app_repeat/app_repeat.o 00:04:38.628 CXX test/cpp_headers/event.o 00:04:38.628 LINK aer 00:04:38.628 CC test/event/scheduler/scheduler.o 00:04:38.628 LINK reactor 00:04:38.628 LINK reactor_perf 00:04:38.628 CC test/env/memory/memory_ut.o 00:04:38.628 CXX test/cpp_headers/fd_group.o 00:04:38.628 LINK app_repeat 00:04:38.887 CXX test/cpp_headers/fd.o 00:04:38.887 CC test/nvme/reset/reset.o 00:04:38.887 CC test/rpc_client/rpc_client_test.o 00:04:38.887 CXX test/cpp_headers/file.o 00:04:38.887 CXX test/cpp_headers/ftl.o 00:04:38.887 CC test/env/pci/pci_ut.o 00:04:38.887 LINK scheduler 00:04:38.887 CXX test/cpp_headers/gpt_spec.o 00:04:39.146 LINK rpc_client_test 00:04:39.146 CXX test/cpp_headers/hexlify.o 00:04:39.146 CXX test/cpp_headers/histogram_data.o 00:04:39.146 LINK reset 00:04:39.146 CXX test/cpp_headers/idxd.o 00:04:39.404 CXX test/cpp_headers/idxd_spec.o 00:04:39.404 CC test/nvme/e2edp/nvme_dp.o 00:04:39.404 CC test/nvme/sgl/sgl.o 00:04:39.404 CXX test/cpp_headers/init.o 00:04:39.404 LINK pci_ut 00:04:39.404 CC test/nvme/overhead/overhead.o 00:04:39.404 CXX test/cpp_headers/ioat.o 00:04:39.404 CC test/nvme/err_injection/err_injection.o 00:04:39.663 CXX test/cpp_headers/ioat_spec.o 00:04:39.663 LINK sgl 00:04:39.663 LINK nvme_dp 00:04:39.663 LINK overhead 00:04:39.663 CC test/nvme/startup/startup.o 00:04:39.663 CC test/thread/poller_perf/poller_perf.o 00:04:39.663 LINK memory_ut 00:04:39.663 CXX test/cpp_headers/iscsi_spec.o 00:04:39.663 LINK err_injection 00:04:39.663 CXX test/cpp_headers/json.o 00:04:39.921 CXX test/cpp_headers/jsonrpc.o 00:04:39.921 CC test/nvme/reserve/reserve.o 00:04:39.921 LINK poller_perf 00:04:39.921 LINK startup 00:04:39.921 CXX test/cpp_headers/likely.o 00:04:39.921 CXX test/cpp_headers/log.o 00:04:39.921 CC test/nvme/simple_copy/simple_copy.o 00:04:39.921 CC test/nvme/connect_stress/connect_stress.o 00:04:39.921 CC test/nvme/boot_partition/boot_partition.o 00:04:39.921 CXX test/cpp_headers/lvol.o 00:04:39.921 CXX test/cpp_headers/memory.o 00:04:39.921 CXX test/cpp_headers/mmio.o 00:04:39.921 LINK reserve 00:04:40.180 CC test/nvme/compliance/nvme_compliance.o 00:04:40.180 LINK simple_copy 00:04:40.180 LINK boot_partition 00:04:40.180 CXX test/cpp_headers/nbd.o 00:04:40.180 LINK connect_stress 00:04:40.180 CXX test/cpp_headers/notify.o 
00:04:40.180 CXX test/cpp_headers/nvme.o 00:04:40.180 CC test/nvme/fused_ordering/fused_ordering.o 00:04:40.180 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:40.437 CXX test/cpp_headers/nvme_intel.o 00:04:40.437 CC test/nvme/fdp/fdp.o 00:04:40.437 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.437 LINK nvme_compliance 00:04:40.437 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.437 LINK fused_ordering 00:04:40.437 CC test/nvme/cuse/cuse.o 00:04:40.437 LINK doorbell_aers 00:04:40.437 CXX test/cpp_headers/nvme_spec.o 00:04:40.696 CXX test/cpp_headers/nvme_zns.o 00:04:40.696 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.696 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.696 CXX test/cpp_headers/nvmf.o 00:04:40.696 CXX test/cpp_headers/nvmf_spec.o 00:04:40.696 LINK fdp 00:04:40.696 CXX test/cpp_headers/nvmf_transport.o 00:04:40.954 CXX test/cpp_headers/opal.o 00:04:40.954 CXX test/cpp_headers/opal_spec.o 00:04:40.954 CXX test/cpp_headers/pci_ids.o 00:04:40.954 CXX test/cpp_headers/pipe.o 00:04:40.954 CXX test/cpp_headers/queue.o 00:04:40.954 CXX test/cpp_headers/reduce.o 00:04:40.954 CXX test/cpp_headers/rpc.o 00:04:40.954 CXX test/cpp_headers/scheduler.o 00:04:41.213 CXX test/cpp_headers/scsi.o 00:04:41.213 CXX test/cpp_headers/scsi_spec.o 00:04:41.213 CXX test/cpp_headers/sock.o 00:04:41.213 CXX test/cpp_headers/stdinc.o 00:04:41.213 CXX test/cpp_headers/string.o 00:04:41.213 CXX test/cpp_headers/thread.o 00:04:41.213 CXX test/cpp_headers/trace.o 00:04:41.213 CXX test/cpp_headers/trace_parser.o 00:04:41.213 CXX test/cpp_headers/tree.o 00:04:41.213 CXX test/cpp_headers/ublk.o 00:04:41.213 CXX test/cpp_headers/util.o 00:04:41.213 CXX test/cpp_headers/uuid.o 00:04:41.213 CXX test/cpp_headers/version.o 00:04:41.213 CXX test/cpp_headers/vfio_user_pci.o 00:04:41.213 CXX test/cpp_headers/vfio_user_spec.o 00:04:41.471 CXX test/cpp_headers/vhost.o 00:04:41.471 CXX test/cpp_headers/vmd.o 00:04:41.471 CXX test/cpp_headers/xor.o 00:04:41.471 CXX test/cpp_headers/zipf.o 00:04:41.471 LINK cuse 00:04:42.846 LINK esnap 00:04:44.749 00:04:44.749 real 0m50.961s 00:04:44.749 user 4m57.906s 00:04:44.749 sys 1m3.541s 00:04:44.749 09:51:42 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:44.749 ************************************ 00:04:44.749 END TEST make 00:04:44.749 ************************************ 00:04:44.749 09:51:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:44.749 09:51:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:44.749 09:51:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:44.749 09:51:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:44.749 09:51:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:44.749 09:51:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:44.749 09:51:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:44.749 09:51:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:44.749 09:51:43 -- scripts/common.sh@335 -- # IFS=.-: 00:04:44.749 09:51:43 -- scripts/common.sh@335 -- # read -ra ver1 00:04:44.749 09:51:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.749 09:51:43 -- scripts/common.sh@336 -- # read -ra ver2 00:04:44.749 09:51:43 -- scripts/common.sh@337 -- # local 'op=<' 00:04:44.749 09:51:43 -- scripts/common.sh@339 -- # ver1_l=2 00:04:44.749 09:51:43 -- scripts/common.sh@340 -- # ver2_l=1 00:04:44.749 09:51:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:44.749 09:51:43 -- scripts/common.sh@343 -- # case "$op" in 00:04:44.749 09:51:43 -- scripts/common.sh@344 -- # : 1 
00:04:44.749 09:51:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:44.749 09:51:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.749 09:51:43 -- scripts/common.sh@364 -- # decimal 1 00:04:44.749 09:51:43 -- scripts/common.sh@352 -- # local d=1 00:04:44.749 09:51:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.749 09:51:43 -- scripts/common.sh@354 -- # echo 1 00:04:44.749 09:51:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:44.749 09:51:43 -- scripts/common.sh@365 -- # decimal 2 00:04:44.749 09:51:43 -- scripts/common.sh@352 -- # local d=2 00:04:44.749 09:51:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.749 09:51:43 -- scripts/common.sh@354 -- # echo 2 00:04:44.749 09:51:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:44.749 09:51:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:44.749 09:51:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:44.749 09:51:43 -- scripts/common.sh@367 -- # return 0 00:04:44.749 09:51:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.749 09:51:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.749 --rc genhtml_branch_coverage=1 00:04:44.749 --rc genhtml_function_coverage=1 00:04:44.749 --rc genhtml_legend=1 00:04:44.749 --rc geninfo_all_blocks=1 00:04:44.749 --rc geninfo_unexecuted_blocks=1 00:04:44.749 00:04:44.749 ' 00:04:44.749 09:51:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.749 --rc genhtml_branch_coverage=1 00:04:44.749 --rc genhtml_function_coverage=1 00:04:44.749 --rc genhtml_legend=1 00:04:44.749 --rc geninfo_all_blocks=1 00:04:44.749 --rc geninfo_unexecuted_blocks=1 00:04:44.749 00:04:44.749 ' 00:04:44.749 09:51:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.749 --rc genhtml_branch_coverage=1 00:04:44.749 --rc genhtml_function_coverage=1 00:04:44.749 --rc genhtml_legend=1 00:04:44.749 --rc geninfo_all_blocks=1 00:04:44.749 --rc geninfo_unexecuted_blocks=1 00:04:44.749 00:04:44.749 ' 00:04:44.749 09:51:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:44.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.749 --rc genhtml_branch_coverage=1 00:04:44.749 --rc genhtml_function_coverage=1 00:04:44.749 --rc genhtml_legend=1 00:04:44.749 --rc geninfo_all_blocks=1 00:04:44.749 --rc geninfo_unexecuted_blocks=1 00:04:44.749 00:04:44.749 ' 00:04:44.749 09:51:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.749 09:51:43 -- nvmf/common.sh@7 -- # uname -s 00:04:44.749 09:51:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.749 09:51:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.749 09:51:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.749 09:51:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.749 09:51:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.749 09:51:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.749 09:51:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.749 09:51:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.749 09:51:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.749 09:51:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:04:44.749 09:51:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:04:44.749 09:51:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:04:44.749 09:51:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.749 09:51:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.749 09:51:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:44.749 09:51:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.749 09:51:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.749 09:51:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.750 09:51:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.750 09:51:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.750 09:51:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.750 09:51:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.750 09:51:43 -- paths/export.sh@5 -- # export PATH 00:04:44.750 09:51:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.750 09:51:43 -- nvmf/common.sh@46 -- # : 0 00:04:44.750 09:51:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:44.750 09:51:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:44.750 09:51:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:44.750 09:51:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.750 09:51:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.750 09:51:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:44.750 09:51:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:44.750 09:51:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:44.750 09:51:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:44.750 09:51:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:44.750 09:51:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:44.750 09:51:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:44.750 09:51:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.750 09:51:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:44.750 09:51:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:44.750 09:51:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:44.750 09:51:43 -- spdk/autotest.sh@46 -- # 
type -P udevadm 00:04:44.750 09:51:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:44.750 09:51:43 -- spdk/autotest.sh@48 -- # udevadm_pid=61845 00:04:44.750 09:51:43 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:44.750 09:51:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:44.750 09:51:43 -- spdk/autotest.sh@54 -- # echo 61853 00:04:44.750 09:51:43 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:44.750 09:51:43 -- spdk/autotest.sh@56 -- # echo 61856 00:04:44.750 09:51:43 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:44.750 09:51:43 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:44.750 09:51:43 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:44.750 09:51:43 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:44.750 09:51:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.750 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:44.750 09:51:43 -- spdk/autotest.sh@70 -- # create_test_list 00:04:44.750 09:51:43 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:44.750 09:51:43 -- common/autotest_common.sh@10 -- # set +x 00:04:44.750 09:51:43 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:44.750 09:51:43 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:44.750 09:51:43 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:44.750 09:51:43 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:44.750 09:51:43 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:44.750 09:51:43 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:44.750 09:51:43 -- common/autotest_common.sh@1450 -- # uname 00:04:44.750 09:51:43 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:44.750 09:51:43 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:44.750 09:51:43 -- common/autotest_common.sh@1470 -- # uname 00:04:44.750 09:51:43 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:44.750 09:51:43 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:44.750 09:51:43 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:44.750 lcov: LCOV version 1.15 00:04:44.750 09:51:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:52.858 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:52.858 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:52.858 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:52.858 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:52.858 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:52.858 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:14.811 09:52:09 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:14.811 09:52:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.811 09:52:09 -- common/autotest_common.sh@10 -- # set +x 00:05:14.811 09:52:09 -- spdk/autotest.sh@89 -- # rm -f 00:05:14.811 09:52:09 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.811 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:14.811 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:14.811 09:52:10 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:14.811 09:52:10 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:14.811 09:52:10 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:14.811 09:52:10 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:14.811 09:52:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.811 09:52:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:14.811 09:52:10 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:14.811 09:52:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.811 09:52:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:14.811 09:52:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:14.811 09:52:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.811 09:52:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:14.811 09:52:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:14.811 09:52:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.811 09:52:10 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:14.811 09:52:10 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:14.811 09:52:10 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:14.811 09:52:10 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.811 09:52:10 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # grep -v p 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:14.811 09:52:10 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:14.811 09:52:10 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:14.811 09:52:10 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:14.811 09:52:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:05:14.811 No valid GPT data, bailing 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # pt= 00:05:14.811 09:52:10 -- scripts/common.sh@394 -- # return 1 00:05:14.811 09:52:10 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:14.811 1+0 records in 00:05:14.811 1+0 records out 00:05:14.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392063 s, 267 MB/s 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:14.811 09:52:10 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:14.811 09:52:10 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:14.811 09:52:10 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:14.811 09:52:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:14.811 No valid GPT data, bailing 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # pt= 00:05:14.811 09:52:10 -- scripts/common.sh@394 -- # return 1 00:05:14.811 09:52:10 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:14.811 1+0 records in 00:05:14.811 1+0 records out 00:05:14.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394462 s, 266 MB/s 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:14.811 09:52:10 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:14.811 09:52:10 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:14.811 09:52:10 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:14.811 09:52:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:14.811 No valid GPT data, bailing 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # pt= 00:05:14.811 09:52:10 -- scripts/common.sh@394 -- # return 1 00:05:14.811 09:52:10 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:14.811 1+0 records in 00:05:14.811 1+0 records out 00:05:14.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469978 s, 223 MB/s 00:05:14.811 09:52:10 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:14.811 09:52:10 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:14.811 09:52:10 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:14.811 09:52:10 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:14.811 09:52:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:14.811 No valid GPT data, bailing 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:14.811 09:52:10 -- scripts/common.sh@393 -- # pt= 00:05:14.811 09:52:10 -- scripts/common.sh@394 -- # return 1 00:05:14.811 09:52:10 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:14.811 1+0 records in 00:05:14.811 1+0 records out 00:05:14.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044895 s, 234 MB/s 00:05:14.811 09:52:10 -- spdk/autotest.sh@116 -- # sync 00:05:14.812 09:52:10 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.812 09:52:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.812 09:52:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
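Editor's note: the trace above is autotest.sh's pre_cleanup device pass. get_zoned_devs collects any zoned namespaces to exclude, every /dev/nvme*n* node (partitions filtered out with grep -v p) is probed with spdk-gpt.py and blkid, and each namespace with no partition table ("No valid GPT data, bailing") has its first MiB zeroed with dd so later tests start from a clean device. Below is a minimal Bash sketch of that flow under simplifying assumptions: the spdk-gpt.py probe is folded into a plain blkid PTTYPE check, and the zoned_devs bookkeeping is a stand-in for the real scripts/common.sh helpers.

    # Sketch of the pre_cleanup namespace wipe traced above (simplified, not the verbatim autotest.sh code).
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=$(basename "$nvme")
        # get_zoned_devs: anything whose queue/zoned attribute is not "none" is skipped later
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        [[ -n ${zoned_devs[$(basename "$dev")]:-} ]] && continue
        # block_in_use stand-in: leave the namespace alone if blkid still sees a partition table
        if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
            continue
        fi
        # "No valid GPT data, bailing" -> zero the first 1 MiB
        dd if=/dev/zero of="$dev" bs=1M count=1
    done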
00:05:14.812 09:52:12 -- spdk/autotest.sh@122 -- # uname -s 00:05:14.812 09:52:12 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:14.812 09:52:12 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:14.812 09:52:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.812 09:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.812 09:52:12 -- common/autotest_common.sh@10 -- # set +x 00:05:14.812 ************************************ 00:05:14.812 START TEST setup.sh 00:05:14.812 ************************************ 00:05:14.812 09:52:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:14.812 * Looking for test storage... 00:05:14.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.812 09:52:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.812 09:52:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.812 09:52:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.812 09:52:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.812 09:52:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.812 09:52:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.812 09:52:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.812 09:52:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.812 09:52:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.812 09:52:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.812 09:52:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.812 09:52:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.812 09:52:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.812 09:52:13 -- scripts/common.sh@344 -- # : 1 00:05:14.812 09:52:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.812 09:52:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.812 09:52:13 -- scripts/common.sh@364 -- # decimal 1 00:05:14.812 09:52:13 -- scripts/common.sh@352 -- # local d=1 00:05:14.812 09:52:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.812 09:52:13 -- scripts/common.sh@354 -- # echo 1 00:05:14.812 09:52:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.812 09:52:13 -- scripts/common.sh@365 -- # decimal 2 00:05:14.812 09:52:13 -- scripts/common.sh@352 -- # local d=2 00:05:14.812 09:52:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.812 09:52:13 -- scripts/common.sh@354 -- # echo 2 00:05:14.812 09:52:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.812 09:52:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.812 09:52:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.812 09:52:13 -- scripts/common.sh@367 -- # return 0 00:05:14.812 09:52:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- setup/test-setup.sh@10 -- # uname -s 00:05:14.812 09:52:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:14.812 09:52:13 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.812 09:52:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.812 09:52:13 -- common/autotest_common.sh@10 -- # set +x 00:05:14.812 ************************************ 00:05:14.812 START TEST acl 00:05:14.812 ************************************ 00:05:14.812 09:52:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.812 * Looking for test storage... 
00:05:14.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.812 09:52:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.812 09:52:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.812 09:52:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.812 09:52:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.812 09:52:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.812 09:52:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.812 09:52:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.812 09:52:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.812 09:52:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.812 09:52:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.812 09:52:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.812 09:52:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.812 09:52:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.812 09:52:13 -- scripts/common.sh@344 -- # : 1 00:05:14.812 09:52:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.812 09:52:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.812 09:52:13 -- scripts/common.sh@364 -- # decimal 1 00:05:14.812 09:52:13 -- scripts/common.sh@352 -- # local d=1 00:05:14.812 09:52:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.812 09:52:13 -- scripts/common.sh@354 -- # echo 1 00:05:14.812 09:52:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.812 09:52:13 -- scripts/common.sh@365 -- # decimal 2 00:05:14.812 09:52:13 -- scripts/common.sh@352 -- # local d=2 00:05:14.812 09:52:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.812 09:52:13 -- scripts/common.sh@354 -- # echo 2 00:05:14.812 09:52:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.812 09:52:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.812 09:52:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.812 09:52:13 -- scripts/common.sh@367 -- # return 0 00:05:14.812 09:52:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.812 --rc genhtml_branch_coverage=1 00:05:14.812 --rc genhtml_function_coverage=1 00:05:14.812 --rc genhtml_legend=1 00:05:14.812 --rc geninfo_all_blocks=1 00:05:14.812 --rc geninfo_unexecuted_blocks=1 00:05:14.812 00:05:14.812 ' 00:05:14.812 09:52:13 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.813 --rc genhtml_branch_coverage=1 00:05:14.813 --rc genhtml_function_coverage=1 00:05:14.813 --rc genhtml_legend=1 00:05:14.813 --rc geninfo_all_blocks=1 00:05:14.813 --rc geninfo_unexecuted_blocks=1 00:05:14.813 00:05:14.813 ' 00:05:14.813 09:52:13 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:14.813 09:52:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:14.813 09:52:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:14.813 09:52:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:14.813 09:52:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.813 09:52:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:14.813 09:52:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:14.813 09:52:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.813 09:52:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:14.813 09:52:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:14.813 09:52:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.813 09:52:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:14.813 09:52:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:14.813 09:52:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:14.813 09:52:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:14.813 09:52:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:14.813 09:52:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:14.813 09:52:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:14.813 09:52:13 -- setup/acl.sh@12 -- # devs=() 00:05:14.813 09:52:13 -- setup/acl.sh@12 -- # declare -a devs 00:05:14.813 09:52:13 -- setup/acl.sh@13 -- # drivers=() 00:05:14.813 09:52:13 -- setup/acl.sh@13 -- # declare -A drivers 00:05:14.813 09:52:13 -- setup/acl.sh@51 -- # setup reset 00:05:14.813 09:52:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.813 09:52:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.754 09:52:14 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:15.754 09:52:14 -- setup/acl.sh@16 -- # local dev driver 00:05:15.754 09:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.754 09:52:14 -- setup/acl.sh@15 -- # setup output status 00:05:15.754 09:52:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.754 09:52:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:15.754 Hugepages 00:05:15.754 node hugesize free / total 00:05:15.754 09:52:14 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:15.754 09:52:14 -- setup/acl.sh@19 -- # continue 00:05:15.754 09:52:14 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:15.754 00:05:15.754 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.754 09:52:14 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:15.754 09:52:14 -- setup/acl.sh@19 -- # continue 00:05:15.754 09:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.754 09:52:14 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:15.754 09:52:14 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:15.754 09:52:14 -- setup/acl.sh@20 -- # continue 00:05:15.754 09:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:16.013 09:52:14 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:16.013 09:52:14 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:16.013 09:52:14 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:16.013 09:52:14 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:16.013 09:52:14 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:16.013 09:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:16.013 09:52:14 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:16.013 09:52:14 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:16.013 09:52:14 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:16.013 09:52:14 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:16.013 09:52:14 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:16.013 09:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:16.013 09:52:14 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:16.013 09:52:14 -- setup/acl.sh@54 -- # run_test denied denied 00:05:16.013 09:52:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.013 09:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.013 09:52:14 -- common/autotest_common.sh@10 -- # set +x 00:05:16.013 ************************************ 00:05:16.013 START TEST denied 00:05:16.013 ************************************ 00:05:16.013 09:52:14 -- common/autotest_common.sh@1114 -- # denied 00:05:16.013 09:52:14 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:16.013 09:52:14 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:16.013 09:52:14 -- setup/acl.sh@38 -- # setup output config 00:05:16.013 09:52:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.013 09:52:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.950 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:16.950 09:52:15 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:16.950 09:52:15 -- setup/acl.sh@28 -- # local dev driver 00:05:16.950 09:52:15 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:16.950 09:52:15 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:16.950 09:52:15 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:16.950 09:52:15 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:16.950 09:52:15 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:16.950 09:52:15 -- setup/acl.sh@41 -- # setup reset 00:05:16.950 09:52:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.950 09:52:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.518 00:05:17.518 real 0m1.497s 00:05:17.518 user 0m0.603s 00:05:17.518 sys 0m0.847s 00:05:17.518 09:52:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.518 ************************************ 00:05:17.518 END TEST denied 00:05:17.518 ************************************ 00:05:17.518 09:52:15 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.518 09:52:16 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:17.518 09:52:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.518 09:52:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.518 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:17.518 ************************************ 00:05:17.518 START TEST allowed 00:05:17.518 ************************************ 00:05:17.518 09:52:16 -- common/autotest_common.sh@1114 -- # allowed 00:05:17.518 09:52:16 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:17.518 09:52:16 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.518 09:52:16 -- setup/acl.sh@45 -- # setup output config 00:05:17.518 09:52:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.518 09:52:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.456 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.456 09:52:16 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:18.456 09:52:16 -- setup/acl.sh@28 -- # local dev driver 00:05:18.456 09:52:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:18.456 09:52:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:18.456 09:52:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:18.456 09:52:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:18.456 09:52:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:18.456 09:52:16 -- setup/acl.sh@48 -- # setup reset 00:05:18.456 09:52:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.456 09:52:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.023 00:05:19.023 real 0m1.503s 00:05:19.023 user 0m0.693s 00:05:19.023 sys 0m0.805s 00:05:19.023 09:52:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.023 ************************************ 00:05:19.023 END TEST allowed 00:05:19.023 ************************************ 00:05:19.023 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.023 ************************************ 00:05:19.023 END TEST acl 00:05:19.023 ************************************ 00:05:19.023 00:05:19.023 real 0m4.430s 00:05:19.023 user 0m1.974s 00:05:19.023 sys 0m2.433s 00:05:19.023 09:52:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.023 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.023 09:52:17 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:19.023 09:52:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.023 09:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.023 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.023 ************************************ 00:05:19.023 START TEST hugepages 00:05:19.023 ************************************ 00:05:19.023 09:52:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:19.284 * Looking for test storage... 
00:05:19.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.284 09:52:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.284 09:52:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.284 09:52:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.284 09:52:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.284 09:52:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.284 09:52:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.284 09:52:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.284 09:52:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.284 09:52:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.284 09:52:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.284 09:52:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.284 09:52:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.284 09:52:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.284 09:52:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.284 09:52:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.284 09:52:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.284 09:52:17 -- scripts/common.sh@344 -- # : 1 00:05:19.284 09:52:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.284 09:52:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.284 09:52:17 -- scripts/common.sh@364 -- # decimal 1 00:05:19.284 09:52:17 -- scripts/common.sh@352 -- # local d=1 00:05:19.284 09:52:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.284 09:52:17 -- scripts/common.sh@354 -- # echo 1 00:05:19.284 09:52:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.284 09:52:17 -- scripts/common.sh@365 -- # decimal 2 00:05:19.284 09:52:17 -- scripts/common.sh@352 -- # local d=2 00:05:19.284 09:52:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.284 09:52:17 -- scripts/common.sh@354 -- # echo 2 00:05:19.284 09:52:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.284 09:52:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.284 09:52:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.284 09:52:17 -- scripts/common.sh@367 -- # return 0 00:05:19.284 09:52:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.284 09:52:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.284 --rc genhtml_branch_coverage=1 00:05:19.284 --rc genhtml_function_coverage=1 00:05:19.284 --rc genhtml_legend=1 00:05:19.284 --rc geninfo_all_blocks=1 00:05:19.284 --rc geninfo_unexecuted_blocks=1 00:05:19.284 00:05:19.284 ' 00:05:19.284 09:52:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.284 --rc genhtml_branch_coverage=1 00:05:19.284 --rc genhtml_function_coverage=1 00:05:19.284 --rc genhtml_legend=1 00:05:19.284 --rc geninfo_all_blocks=1 00:05:19.284 --rc geninfo_unexecuted_blocks=1 00:05:19.284 00:05:19.284 ' 00:05:19.284 09:52:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.284 --rc genhtml_branch_coverage=1 00:05:19.284 --rc genhtml_function_coverage=1 00:05:19.284 --rc genhtml_legend=1 00:05:19.284 --rc geninfo_all_blocks=1 00:05:19.284 --rc geninfo_unexecuted_blocks=1 00:05:19.284 00:05:19.284 ' 00:05:19.284 09:52:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.284 --rc genhtml_branch_coverage=1 00:05:19.284 --rc genhtml_function_coverage=1 00:05:19.284 --rc genhtml_legend=1 00:05:19.284 --rc geninfo_all_blocks=1 00:05:19.284 --rc geninfo_unexecuted_blocks=1 00:05:19.284 00:05:19.284 ' 00:05:19.284 09:52:17 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:19.284 09:52:17 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:19.284 09:52:17 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:19.284 09:52:17 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:19.284 09:52:17 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:19.284 09:52:17 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:19.284 09:52:17 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:19.284 09:52:17 -- setup/common.sh@18 -- # local node= 00:05:19.284 09:52:17 -- setup/common.sh@19 -- # local var val 00:05:19.284 09:52:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.284 09:52:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.284 09:52:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.284 09:52:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.284 09:52:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.284 09:52:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4405388 kB' 'MemAvailable: 7335580 kB' 'Buffers: 3704 kB' 'Cached: 3129828 kB' 'SwapCached: 0 kB' 'Active: 496516 kB' 'Inactive: 2753836 kB' 'Active(anon): 127332 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 118584 kB' 'Mapped: 51364 kB' 'Shmem: 10512 kB' 'KReclaimable: 88648 kB' 'Slab: 192104 kB' 'SReclaimable: 88648 kB' 'SUnreclaim: 103456 kB' 'KernelStack: 6744 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- 
setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.284 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.284 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.285 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.285 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.286 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.286 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.286 09:52:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.286 09:52:17 -- setup/common.sh@32 -- # continue 00:05:19.286 09:52:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.286 09:52:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.286 09:52:17 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.286 09:52:17 -- setup/common.sh@33 -- # echo 2048 00:05:19.286 09:52:17 -- setup/common.sh@33 -- # return 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:19.286 09:52:17 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:19.286 09:52:17 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:19.286 09:52:17 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:19.286 09:52:17 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:19.286 09:52:17 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:19.286 09:52:17 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:19.286 09:52:17 -- setup/hugepages.sh@207 -- # get_nodes 00:05:19.286 09:52:17 -- setup/hugepages.sh@27 -- # local node 00:05:19.286 09:52:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.286 09:52:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:19.286 09:52:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.286 09:52:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.286 09:52:17 -- setup/hugepages.sh@208 -- # clear_hp 00:05:19.286 09:52:17 -- setup/hugepages.sh@37 -- # local node hp 00:05:19.286 09:52:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.286 09:52:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.286 09:52:17 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.286 09:52:17 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.286 09:52:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.286 09:52:17 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:19.286 09:52:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.286 09:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.286 09:52:17 -- common/autotest_common.sh@10 -- # set +x 00:05:19.286 ************************************ 00:05:19.286 START TEST default_setup 00:05:19.286 ************************************ 00:05:19.286 09:52:17 -- common/autotest_common.sh@1114 -- # default_setup 00:05:19.286 09:52:17 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.286 09:52:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:19.286 09:52:17 -- setup/hugepages.sh@51 -- # shift 00:05:19.286 09:52:17 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:19.286 09:52:17 -- setup/hugepages.sh@52 -- # local node_ids 00:05:19.286 09:52:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.286 09:52:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:19.286 09:52:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:19.286 09:52:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.286 09:52:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:19.286 09:52:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.286 09:52:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.286 09:52:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.286 09:52:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:19.286 09:52:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:19.286 09:52:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:19.286 09:52:17 -- setup/hugepages.sh@73 -- # return 0 00:05:19.286 09:52:17 -- setup/hugepages.sh@137 -- # setup output 00:05:19.286 09:52:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.286 09:52:17 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.226 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.226 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.226 09:52:18 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:20.226 09:52:18 -- setup/hugepages.sh@89 -- # local node 00:05:20.226 09:52:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.226 09:52:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.226 09:52:18 -- setup/hugepages.sh@92 -- # local surp 00:05:20.226 09:52:18 -- setup/hugepages.sh@93 -- # local resv 00:05:20.226 09:52:18 -- setup/hugepages.sh@94 -- # local anon 00:05:20.226 09:52:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.226 09:52:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.226 09:52:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.226 09:52:18 -- setup/common.sh@18 -- # local node= 00:05:20.226 09:52:18 -- setup/common.sh@19 -- # local var val 00:05:20.226 09:52:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.226 09:52:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.226 09:52:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.226 09:52:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.226 09:52:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.226 09:52:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6529912 kB' 'MemAvailable: 9460000 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497800 kB' 'Inactive: 2753840 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 51148 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191872 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103444 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.226 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.226 09:52:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- 
setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.227 09:52:18 -- setup/common.sh@33 -- # echo 0 00:05:20.227 09:52:18 -- setup/common.sh@33 -- # return 0 00:05:20.227 09:52:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.227 09:52:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.227 09:52:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.227 09:52:18 -- setup/common.sh@18 -- # local node= 00:05:20.227 09:52:18 -- setup/common.sh@19 -- # local var val 00:05:20.227 09:52:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.227 09:52:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.227 09:52:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.227 09:52:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.227 09:52:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.227 09:52:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6529912 kB' 'MemAvailable: 9460000 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497824 kB' 'Inactive: 2753840 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191876 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103448 kB' 'KernelStack: 6688 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.227 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.227 09:52:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 
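The [[ key == ... ]] / continue pairs running through this block are setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the one it was asked for (HugePages_Surp here, AnonHugePages just above). A minimal self-contained sketch of that pattern follows; the function name, the "Node N " prefix stripping and the IFS=': ' read are taken from the trace, while the exact argument handling is an assumption rather than the repository's verbatim code.

    #!/usr/bin/env bash
    shopt -s extglob                              # needed for the +([0-9]) pattern below

    get_meminfo() {                               # usage: get_meminfo <Field> [<numa-node>]
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        # per-node queries read the node's own meminfo file instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix on per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo Hugepagesize                      # -> 2048 on this VM
    get_meminfo HugePages_Surp 0                  # -> 0, read from node0's meminfo

Every field that does not match simply hits the continue branch, which is why the same block of entries repeats once per query in the trace.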
00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- 
setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 
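Before this verification started, hugepages.sh@207-210 (earlier in the trace) discovered the NUMA nodes, zeroed any hugepages already reserved on them, exported CLEAR_HUGE=yes and then ran scripts/setup.sh to allocate the 1024 x 2048 kB pages that default_setup requested. A hedged sketch of that clearing step, using only the sysfs paths visible in the trace; passing the page count to setup.sh through NRHUGE is shown as an assumption, since the excerpt only records that the script was invoked.

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node+([0-9]); do
            # zero every supported page size (hugepages-2048kB, hugepages-1048576kB, ...)
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"       # requires root, as in the real run
            done
        done
        export CLEAR_HUGE=yes
    }

    clear_hp
    NRHUGE=1024 ./scripts/setup.sh                # hypothetical invocation; the log only shows setup.sh being run

The three device lines that setup.sh printed in the trace show the result of that run: the virtio boot disk is skipped because its partitions are mounted, while the two emulated NVMe controllers are rebound from the nvme driver to uio_pci_generic.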
00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.228 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.228 09:52:18 -- setup/common.sh@33 -- # echo 0 00:05:20.228 09:52:18 -- setup/common.sh@33 -- # return 0 00:05:20.228 09:52:18 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.228 09:52:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.228 09:52:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.228 09:52:18 -- setup/common.sh@18 -- # local node= 00:05:20.228 09:52:18 -- setup/common.sh@19 -- # local var val 00:05:20.228 09:52:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.228 09:52:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.228 09:52:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.228 09:52:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.228 09:52:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.228 09:52:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.228 
09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.228 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6529912 kB' 'MemAvailable: 9460000 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497828 kB' 'Inactive: 2753840 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119736 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191872 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103444 kB' 'KernelStack: 6688 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 
09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.229 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.229 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 
09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.230 09:52:18 -- setup/common.sh@33 -- # echo 0 00:05:20.230 09:52:18 -- setup/common.sh@33 -- # return 0 00:05:20.230 nr_hugepages=1024 00:05:20.230 resv_hugepages=0 00:05:20.230 surplus_hugepages=0 00:05:20.230 anon_hugepages=0 00:05:20.230 09:52:18 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.230 09:52:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.230 09:52:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.230 09:52:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.230 09:52:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.230 09:52:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.230 09:52:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.230 09:52:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.230 09:52:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.230 09:52:18 -- setup/common.sh@18 -- # local node= 00:05:20.230 09:52:18 -- setup/common.sh@19 -- # local var val 00:05:20.230 09:52:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.230 09:52:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.230 09:52:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.230 09:52:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.230 09:52:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.230 09:52:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6529912 kB' 'MemAvailable: 9460000 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 2753840 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191872 kB' 
'SReclaimable: 88428 kB' 'SUnreclaim: 103444 kB' 'KernelStack: 6688 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.230 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.230 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 
09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- 
setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.491 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.491 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.492 09:52:18 -- 
setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.492 09:52:18 -- setup/common.sh@33 -- # echo 1024 00:05:20.492 09:52:18 -- setup/common.sh@33 -- # return 0 00:05:20.492 09:52:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.492 09:52:18 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.492 09:52:18 -- setup/hugepages.sh@27 -- # local node 00:05:20.492 09:52:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.492 09:52:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.492 09:52:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.492 09:52:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.492 09:52:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.492 09:52:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.492 09:52:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.492 09:52:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.492 09:52:18 -- setup/common.sh@18 -- # local node=0 00:05:20.492 09:52:18 -- setup/common.sh@19 -- # local var val 00:05:20.492 09:52:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.492 09:52:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.492 09:52:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.492 09:52:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.492 09:52:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.492 09:52:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.492 09:52:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6529912 kB' 'MemUsed: 5709200 kB' 'SwapCached: 0 kB' 'Active: 498288 kB' 'Inactive: 2753840 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 3133520 kB' 'Mapped: 51300 kB' 'AnonPages: 120228 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191876 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 
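(The trace above is setup/common.sh's get_meminfo helper stepping through /proc/meminfo, or the per-node /sys/devices/system/node/node0/meminfo, one "var: val" pair at a time until it reaches the requested key -- first HugePages_Total, which echoes 1024, then HugePages_Surp for node 0, which echoes 0. A minimal standalone sketch of that lookup pattern, written here for illustration only and not copied from the repository, assuming a Linux host with the usual meminfo layout:

#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo lookup traced above; the real helper
# in setup/common.sh differs in detail (for example, it strips the per-node
# prefix with an extglob pattern).
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val _
    local -a mem
    # With a node argument, prefer that node's meminfo view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    [[ -n $node ]] && mem=("${mem[@]#Node $node }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total     # 1024 in the run above
get_meminfo HugePages_Surp 0    # 0 in the run above

verify_nr_hugepages in setup/hugepages.sh then feeds these values into the "(( 1024 == nr_hugepages + surp + resv ))" check seen below and compares each NUMA node's count against the expected allocation, which is where the "node0=1024 expecting 1024" line further down comes from.)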
00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.492 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.492 09:52:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # continue 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.493 09:52:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.493 09:52:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.493 09:52:18 -- setup/common.sh@33 -- # echo 0 00:05:20.493 09:52:18 -- setup/common.sh@33 -- # return 0 00:05:20.493 09:52:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.493 09:52:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.493 09:52:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.493 09:52:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.493 node0=1024 expecting 1024 00:05:20.493 09:52:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.493 09:52:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.493 ************************************ 00:05:20.493 END TEST default_setup 00:05:20.493 ************************************ 00:05:20.493 00:05:20.493 real 0m1.030s 00:05:20.493 user 0m0.457s 00:05:20.493 sys 0m0.480s 00:05:20.493 09:52:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.493 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.493 09:52:18 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:20.493 09:52:18 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.493 09:52:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.493 09:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:20.493 ************************************ 00:05:20.493 START TEST per_node_1G_alloc 00:05:20.493 ************************************ 00:05:20.493 09:52:18 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:20.493 09:52:18 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:20.493 09:52:18 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:20.493 09:52:18 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.493 09:52:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.493 09:52:18 -- setup/hugepages.sh@51 -- # shift 00:05:20.493 09:52:18 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.493 09:52:18 -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.493 09:52:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.493 09:52:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.493 09:52:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.493 09:52:18 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.493 09:52:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.493 09:52:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.493 09:52:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.493 09:52:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.493 09:52:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.493 09:52:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.493 09:52:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.493 09:52:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.493 09:52:18 -- setup/hugepages.sh@73 -- # return 0 00:05:20.493 09:52:18 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:20.493 09:52:18 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:20.493 09:52:18 -- setup/hugepages.sh@146 -- # setup output 00:05:20.493 09:52:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.493 09:52:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.753 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.753 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.753 09:52:19 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:20.753 09:52:19 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:20.753 09:52:19 -- setup/hugepages.sh@89 -- # local node 00:05:20.753 09:52:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.753 09:52:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.753 09:52:19 -- setup/hugepages.sh@92 -- # local surp 00:05:20.753 09:52:19 -- setup/hugepages.sh@93 -- # local resv 00:05:20.753 09:52:19 -- setup/hugepages.sh@94 -- # local anon 00:05:20.753 09:52:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.753 09:52:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.753 09:52:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.753 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:20.753 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:20.753 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.753 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.753 09:52:19 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.753 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.753 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.753 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7582248 kB' 'MemAvailable: 10512344 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 498072 kB' 'Inactive: 2753848 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 51420 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191868 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103440 kB' 'KernelStack: 6676 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 
-- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 
09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.753 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.753 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.754 09:52:19 -- setup/common.sh@33 -- # echo 0 00:05:20.754 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:20.754 09:52:19 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.754 09:52:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.754 09:52:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.754 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:20.754 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:20.754 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.754 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.754 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.754 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.754 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.754 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7582248 kB' 'MemAvailable: 10512344 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497756 kB' 'Inactive: 2753848 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 
kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191864 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103436 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # continue 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.754 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.754 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.016 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.016 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # 
continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.017 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.017 09:52:19 -- setup/common.sh@33 -- # echo 0 00:05:21.017 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:21.017 09:52:19 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.017 09:52:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.017 09:52:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.017 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:21.017 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:21.017 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.017 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.017 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.017 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.017 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.017 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.017 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7582248 kB' 'MemAvailable: 10512344 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 2753848 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191860 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103432 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.018 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.018 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.019 09:52:19 -- setup/common.sh@33 -- # echo 0 00:05:21.019 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:21.019 09:52:19 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.019 09:52:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:21.019 nr_hugepages=512 00:05:21.019 resv_hugepages=0 00:05:21.019 surplus_hugepages=0 00:05:21.019 anon_hugepages=0 00:05:21.019 09:52:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.019 09:52:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.019 09:52:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.019 09:52:19 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.019 09:52:19 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:21.019 09:52:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.019 09:52:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.019 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:21.019 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:21.019 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.019 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.019 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.019 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.019 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.019 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7582868 kB' 'MemAvailable: 10512964 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497812 kB' 'Inactive: 2753848 kB' 'Active(anon): 128628 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191860 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103432 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 
09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 
09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.019 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.019 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.020 09:52:19 -- setup/common.sh@33 -- # echo 512 00:05:21.020 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:21.020 09:52:19 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.020 09:52:19 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.020 09:52:19 -- setup/hugepages.sh@27 -- # local node 00:05:21.020 09:52:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.020 09:52:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.020 09:52:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.020 09:52:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.020 09:52:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.020 09:52:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.020 09:52:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.020 09:52:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.020 09:52:19 -- setup/common.sh@18 -- # local node=0 00:05:21.020 09:52:19 -- setup/common.sh@19 -- # local 
var val 00:05:21.020 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.020 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.020 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.020 09:52:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.020 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.020 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7582868 kB' 'MemUsed: 4656244 kB' 'SwapCached: 0 kB' 'Active: 497560 kB' 'Inactive: 2753848 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 3133520 kB' 'Mapped: 51040 kB' 'AnonPages: 119772 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191856 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.020 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.020 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- 
setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.021 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.021 09:52:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.021 09:52:19 -- setup/common.sh@33 -- # echo 0 00:05:21.021 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:21.021 09:52:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.021 node0=512 expecting 512 00:05:21.021 09:52:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.021 09:52:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.021 09:52:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.021 09:52:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.021 ************************************ 00:05:21.021 END TEST per_node_1G_alloc 00:05:21.021 ************************************ 00:05:21.021 09:52:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:21.021 00:05:21.021 real 0m0.575s 00:05:21.021 user 0m0.283s 00:05:21.021 sys 0m0.283s 00:05:21.021 09:52:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.021 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 09:52:19 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:21.021 09:52:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.021 09:52:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.021 09:52:19 -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 ************************************ 00:05:21.021 START TEST even_2G_alloc 00:05:21.021 ************************************ 00:05:21.021 09:52:19 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:21.021 09:52:19 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:21.021 09:52:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.022 09:52:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.022 09:52:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.022 09:52:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.022 09:52:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.022 09:52:19 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.022 09:52:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.022 09:52:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.022 09:52:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.022 09:52:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.022 09:52:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.022 09:52:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.022 09:52:19 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.022 09:52:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.022 09:52:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:21.022 09:52:19 -- setup/hugepages.sh@83 -- # : 0 00:05:21.022 09:52:19 -- setup/hugepages.sh@84 -- # : 0 00:05:21.022 09:52:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.022 09:52:19 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:21.022 09:52:19 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:21.022 09:52:19 -- setup/hugepages.sh@153 -- # setup output 00:05:21.022 09:52:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.022 09:52:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.542 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.542 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.542 09:52:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:21.542 09:52:19 -- setup/hugepages.sh@89 -- # local node 00:05:21.542 09:52:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.542 09:52:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.542 09:52:19 -- setup/hugepages.sh@92 -- # local surp 00:05:21.542 09:52:19 -- setup/hugepages.sh@93 -- # local resv 00:05:21.542 09:52:19 -- setup/hugepages.sh@94 -- # local anon 00:05:21.542 09:52:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.542 09:52:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.542 09:52:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.542 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:21.542 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:21.542 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.542 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.542 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.542 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.542 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.542 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6534344 kB' 'MemAvailable: 9464440 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 498168 kB' 'Inactive: 2753848 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 120180 kB' 'Mapped: 51168 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191836 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103408 kB' 'KernelStack: 6664 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.542 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.542 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 
09:52:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # 
continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.543 09:52:19 -- setup/common.sh@33 -- # echo 0 00:05:21.543 09:52:19 -- setup/common.sh@33 -- # return 0 00:05:21.543 09:52:19 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.543 09:52:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.543 09:52:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.543 09:52:19 -- setup/common.sh@18 -- # local node= 00:05:21.543 09:52:19 -- setup/common.sh@19 -- # local var val 00:05:21.543 09:52:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.543 09:52:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.543 09:52:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.543 09:52:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.543 09:52:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.543 09:52:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6534344 kB' 'MemAvailable: 9464440 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497724 kB' 'Inactive: 2753848 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191840 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103412 kB' 'KernelStack: 6696 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 
00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.543 09:52:19 -- setup/common.sh@32 -- # continue 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.543 09:52:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # 
continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.544 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.544 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.545 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:21.545 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:21.545 09:52:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.545 09:52:20 -- 
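(Editor's note, not part of the captured log.) The run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is the helper walking every meminfo line until it reaches the requested key, echoing its value (0 here, captured as surp=0). A minimal standalone sketch of that lookup pattern, assuming a simplified get_meminfo that is not the actual setup/common.sh implementation (which first slurps the file into an array):

  #!/usr/bin/env bash
  # Minimal sketch: return the value of one /proc/meminfo field, using the
  # same IFS=': ' / read / compare-and-continue loop visible in the trace.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }
  # Usage: surp=$(get_meminfo_sketch HugePages_Surp)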
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.545 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.545 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:21.545 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:21.545 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.545 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.545 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.545 09:52:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.545 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.545 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6534092 kB' 'MemAvailable: 9464188 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497576 kB' 'Inactive: 2753848 kB' 'Active(anon): 128392 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191844 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103416 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.545 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.545 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- 
setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.546 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:21.546 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:21.546 09:52:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.546 09:52:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.546 nr_hugepages=1024 00:05:21.546 resv_hugepages=0 00:05:21.546 09:52:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.546 surplus_hugepages=0 00:05:21.546 anon_hugepages=0 00:05:21.546 09:52:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.546 09:52:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.546 09:52:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.546 09:52:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.546 09:52:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.546 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.546 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:21.546 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:21.546 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.546 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.546 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.546 09:52:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.546 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.546 09:52:20 -- setup/common.sh@29 -- # 
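(Editor's note, not part of the captured log.) At this point all three correction terms are known (resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the @107 check simply verifies that the configured page count accounts for the whole pool. A worked restatement of that arithmetic, with the values taken from the snapshots above:

  nr_hugepages=1024   # HugePages_Total from the snapshot
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  (( 1024 == nr_hugepages + surp + resv )) && echo "pool accounted for"
  # Consistency check against the snapshot's Hugetlb field:
  #   1024 pages x 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'.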
mem=("${mem[@]#Node +([0-9]) }") 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6534092 kB' 'MemAvailable: 9464188 kB' 'Buffers: 3704 kB' 'Cached: 3129816 kB' 'SwapCached: 0 kB' 'Active: 497792 kB' 'Inactive: 2753848 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191836 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103408 kB' 'KernelStack: 6672 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 
09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.546 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.546 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 
00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.547 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.547 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.548 09:52:20 -- setup/common.sh@33 -- # echo 1024 00:05:21.548 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:21.548 09:52:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.548 09:52:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.548 09:52:20 -- setup/hugepages.sh@27 -- # local node 00:05:21.548 09:52:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.548 09:52:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.548 09:52:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.548 09:52:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.548 09:52:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.548 09:52:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.548 09:52:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.548 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.548 09:52:20 -- setup/common.sh@18 -- # local node=0 00:05:21.548 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:21.548 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.548 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.548 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.548 09:52:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.548 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.548 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6534092 kB' 'MemUsed: 5705020 kB' 'SwapCached: 0 kB' 'Active: 497608 kB' 'Inactive: 2753848 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 3133520 kB' 'Mapped: 51040 kB' 'AnonPages: 119508 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191836 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.548 09:52:20 -- 
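(Editor's note, not part of the captured log.) The @22-@24 lines above show how the same helper is repointed at per-node counters: with no node argument the probe path /sys/devices/system/node/node/meminfo does not exist, so mem_f stays on /proc/meminfo; with node=0 it switches to /sys/devices/system/node/node0/meminfo, whose dump (MemUsed, FilePages, per-node HugePages_*) is what gets printed next. A hedged sketch of that selection logic, not the actual setup/common.sh code:

  # Pick the meminfo source the way the trace does: per-node file if it
  # exists, otherwise fall back to the system-wide /proc/meminfo.
  node=${1:-}          # e.g. "0" for node0, empty for the whole system
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  echo "reading counters from $mem_f"
  # (Per-node files prefix each line with 'Node <n> ', which the harness
  # strips with "${mem[@]#Node +([0-9]) }" before parsing.)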
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- 
setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.548 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.548 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # continue 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.549 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.549 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.549 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:21.549 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:21.549 09:52:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.549 09:52:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.549 09:52:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.549 09:52:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.549 
node0=1024 expecting 1024 00:05:21.549 09:52:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.549 09:52:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.549 00:05:21.549 real 0m0.530s 00:05:21.549 user 0m0.254s 00:05:21.549 sys 0m0.297s 00:05:21.549 09:52:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.549 09:52:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.549 ************************************ 00:05:21.549 END TEST even_2G_alloc 00:05:21.549 ************************************ 00:05:21.549 09:52:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:21.549 09:52:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.549 09:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.549 09:52:20 -- common/autotest_common.sh@10 -- # set +x 00:05:21.808 ************************************ 00:05:21.808 START TEST odd_alloc 00:05:21.808 ************************************ 00:05:21.808 09:52:20 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:21.808 09:52:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:21.808 09:52:20 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:21.808 09:52:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:21.808 09:52:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.808 09:52:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.808 09:52:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.808 09:52:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:21.808 09:52:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.808 09:52:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.808 09:52:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.808 09:52:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:21.808 09:52:20 -- setup/hugepages.sh@83 -- # : 0 00:05:21.808 09:52:20 -- setup/hugepages.sh@84 -- # : 0 00:05:21.808 09:52:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.808 09:52:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:21.808 09:52:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:21.808 09:52:20 -- setup/hugepages.sh@160 -- # setup output 00:05:21.808 09:52:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.808 09:52:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.070 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.070 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.070 09:52:20 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:22.070 09:52:20 -- setup/hugepages.sh@89 -- # local node 00:05:22.070 09:52:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.070 09:52:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.070 09:52:20 -- setup/hugepages.sh@92 -- # local surp 00:05:22.070 09:52:20 -- setup/hugepages.sh@93 -- # local resv 00:05:22.070 09:52:20 -- setup/hugepages.sh@94 -- # local anon 00:05:22.070 09:52:20 -- 
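(Editor's note, not part of the captured log.) The even_2G_alloc case closes with the per-node check passing: node0 reports 1024 hugepages and the test expected 1024 ("node0=1024 expecting 1024", then the [[ 1024 == 1024 ]] comparison). The odd_alloc case that starts here requests an odd total instead: HUGEMEM=2049 is turned into nr_hugepages=1025 on the single node, and the next snapshot accordingly shows 'HugePages_Total: 1025' with 'Hugetlb: 2099200 kB'. A small consistency check of those numbers (plain arithmetic, not part of the test scripts):

  # odd_alloc sizing, using the values visible in the trace:
  pages=1025                     # nr_hugepages requested for node0
  page_kb=2048                   # Hugepagesize from the snapshots
  echo $(( pages * page_kb ))    # 2099200 kB, matching 'Hugetlb: 2099200 kB' below
  # For comparison, even_2G_alloc used 1024 pages -> 2097152 kB.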
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.070 09:52:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.070 09:52:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.070 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:22.070 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:22.070 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.070 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.070 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.070 09:52:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.070 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.070 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6531624 kB' 'MemAvailable: 9461724 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 2753852 kB' 'Active(anon): 128692 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120084 kB' 'Mapped: 51164 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191836 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103408 kB' 'KernelStack: 6680 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 
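As a rough stand-alone illustration of the check being traced here (not the actual setup/hugepages.sh code, and with illustrative variable names): the hugepages.sh@96 test looks at the transparent-hugepage mode string (here "always [madvise] never") and only bothers sampling AnonHugePages when THP is not globally disabled, since only then can THP contribute anonymous huge pages.

    # Sketch only -- not the SPDK implementation; uses the standard sysfs/procfs paths.
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP enabled in some form, so read the anonymous huge page counter
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "anon=${anon_kb}"   # this run reports AnonHugePages: 0 kB, hence anon=0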
00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.070 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.070 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # 
continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.071 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:22.071 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:22.071 09:52:20 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.071 09:52:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.071 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.071 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:22.071 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:22.071 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.071 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.071 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.071 09:52:20 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.071 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.071 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532288 kB' 'MemAvailable: 9462388 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497636 kB' 'Inactive: 2753852 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 51164 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191832 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103404 kB' 'KernelStack: 6648 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.071 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.071 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 
-- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 
00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.072 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.072 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.073 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:22.073 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:22.073 09:52:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.073 09:52:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.073 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.073 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:22.073 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:22.073 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.073 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.073 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.073 09:52:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.073 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.073 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532288 kB' 'MemAvailable: 9462388 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497568 kB' 'Inactive: 2753852 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191844 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103416 kB' 'KernelStack: 6688 kB' 
'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
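For reference while reading these lookups: HugePages_Rsvd counts huge pages a mapping has reserved but not yet faulted in, while HugePages_Surp counts pages allocated above vm.nr_hugepages (bounded by vm.nr_overcommit_hugepages). A quick way to eyeball the same counters outside the test harness, using only standard procfs files, is sketched below.

    # Not part of the test run -- just a manual spot-check of the counters being parsed here.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    cat /proc/sys/vm/nr_hugepages /proc/sys/vm/nr_overcommit_hugepages
    # odd_alloc just requested 1025 pages, so the test expects both Rsvd and Surp to be 0 here.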
00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.073 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.073 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.074 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:22.074 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:22.074 09:52:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.074 nr_hugepages=1025 00:05:22.074 09:52:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:22.074 resv_hugepages=0 00:05:22.074 09:52:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.074 surplus_hugepages=0 00:05:22.074 09:52:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.074 anon_hugepages=0 00:05:22.074 09:52:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.074 09:52:20 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:22.074 09:52:20 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:22.074 09:52:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.074 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.074 09:52:20 -- setup/common.sh@18 -- # local node= 00:05:22.074 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:22.074 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.074 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.074 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.074 09:52:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.074 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.074 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532288 kB' 'MemAvailable: 9462388 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497612 kB' 'Inactive: 2753852 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191844 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103416 kB' 'KernelStack: 6688 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 
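The key/value scan that follows walks the snapshot above one field at a time until it reaches HugePages_Total; the same lookup can be expressed more compactly. Below is a simplified sketch of a get_meminfo-style helper (illustrative only, not the setup/common.sh implementation, and the function name is made up) whose results feed the (( 1025 == nr_hugepages + surp + resv )) check echoed a few lines earlier.

    # Simplified sketch of a get_meminfo-style lookup (illustrative only).
    # With no node argument it reads /proc/meminfo; with a node number it reads the
    # per-node file, whose lines carry a "Node <N> " prefix that must be stripped.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; break; }
        done
    }
    # On this run: get_meminfo_sketch HugePages_Total -> 1025, HugePages_Rsvd -> 0,
    # HugePages_Surp -> 0, so the 1025 == nr_hugepages + surp + resv check holds.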
00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.074 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.074 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 
00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.075 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.075 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.076 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.076 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.335 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.335 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.336 09:52:20 -- setup/common.sh@33 -- # echo 1025 00:05:22.336 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:22.336 09:52:20 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:22.336 09:52:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.336 09:52:20 -- setup/hugepages.sh@27 -- # local node 00:05:22.336 09:52:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.336 09:52:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
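The field-by-field scan that just returned 1025 is the get_meminfo helper visible in the setup/common.sh trace markers: it reads the relevant meminfo file, splits each line on ': ', skips entries until the requested key matches, then echoes the value. A condensed, approximate reconstruction from the traced statements (argument handling in the real setup/common.sh may differ):

shopt -s extglob                          # needed for the +([0-9]) prefix strip below

get_meminfo() {                           # usage: get_meminfo HugePages_Total [node]
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    # per-node lookups read that node's own meminfo and drop the "Node N " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # keep scanning until the requested field
        echo "$val"                       # kB value, or a bare count for HugePages_*
        return 0
    done
    return 1
}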
00:05:22.336 09:52:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.336 09:52:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.336 09:52:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.336 09:52:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.336 09:52:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.336 09:52:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.336 09:52:20 -- setup/common.sh@18 -- # local node=0 00:05:22.336 09:52:20 -- setup/common.sh@19 -- # local var val 00:05:22.336 09:52:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.336 09:52:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.336 09:52:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.336 09:52:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.336 09:52:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.336 09:52:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532288 kB' 'MemUsed: 5706824 kB' 'SwapCached: 0 kB' 'Active: 497612 kB' 'Inactive: 2753852 kB' 'Active(anon): 128428 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133524 kB' 'Mapped: 51040 kB' 'AnonPages: 119772 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191844 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 
09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 
09:52:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.336 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.336 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # continue 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.337 09:52:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.337 09:52:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.337 09:52:20 -- setup/common.sh@33 -- # echo 0 00:05:22.337 09:52:20 -- setup/common.sh@33 -- # return 0 00:05:22.337 09:52:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.337 09:52:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.337 09:52:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.337 node0=1025 expecting 1025 00:05:22.337 09:52:20 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:22.337 09:52:20 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:22.337 00:05:22.337 real 0m0.555s 00:05:22.337 user 0m0.241s 00:05:22.337 sys 0m0.321s 00:05:22.337 09:52:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.337 09:52:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.337 ************************************ 00:05:22.337 END TEST odd_alloc 00:05:22.337 ************************************ 00:05:22.337 09:52:20 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:22.337 09:52:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.337 09:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.337 09:52:20 -- common/autotest_common.sh@10 -- # set +x 00:05:22.337 ************************************ 00:05:22.337 START TEST custom_alloc 00:05:22.337 ************************************ 00:05:22.337 09:52:20 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:22.337 09:52:20 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:22.337 09:52:20 -- setup/hugepages.sh@169 -- # local node 00:05:22.337 09:52:20 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:22.337 09:52:20 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:22.337 09:52:20 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:22.337 09:52:20 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:22.337 09:52:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:22.337 09:52:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:22.337 09:52:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.337 09:52:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.337 09:52:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.337 09:52:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.337 09:52:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.337 09:52:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@83 -- # : 0 00:05:22.337 09:52:20 -- setup/hugepages.sh@84 -- # : 0 00:05:22.337 09:52:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:22.337 09:52:20 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:22.337 09:52:20 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:22.337 09:52:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.337 09:52:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.337 09:52:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.337 09:52:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.337 09:52:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.337 09:52:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:22.337 09:52:20 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:22.337 09:52:20 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:22.337 09:52:20 -- setup/hugepages.sh@78 -- # return 0 00:05:22.337 09:52:20 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:22.337 09:52:20 -- setup/hugepages.sh@187 -- # setup output 00:05:22.337 09:52:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.337 09:52:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.600 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.600 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.600 09:52:21 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:22.600 09:52:21 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:22.600 09:52:21 -- setup/hugepages.sh@89 -- # local node 00:05:22.600 09:52:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.600 09:52:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.600 09:52:21 -- setup/hugepages.sh@92 -- # local surp 
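The sizing step traced just above, get_test_nr_hugepages 1048576, is plain division by the default hugepage size; a minimal worked sketch, assuming the 2048 kB Hugepagesize reported in the meminfo dumps that follow and the single NUMA node of this runner:

size_kb=1048576                               # requested test size in kB (1 GiB)
hugepage_kb=2048                              # Hugepagesize: 2048 kB on this machine
nr_hugepages=$(( size_kb / hugepage_kb ))     # 1048576 / 2048 = 512
nodes_hp[0]=$nr_hugepages                     # one node, so everything lands on node 0
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"         # matches the value set at hugepages.sh@187
echo "$nr_hugepages pages -> $HUGENODE"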
00:05:22.600 09:52:21 -- setup/hugepages.sh@93 -- # local resv 00:05:22.600 09:52:21 -- setup/hugepages.sh@94 -- # local anon 00:05:22.600 09:52:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.600 09:52:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.600 09:52:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.600 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:22.600 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:22.600 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.600 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.600 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.600 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.600 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.600 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.600 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.600 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7580876 kB' 'MemAvailable: 10510976 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497828 kB' 'Inactive: 2753852 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191912 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6688 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.601 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.601 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.602 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:22.602 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:22.602 09:52:21 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.602 09:52:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.602 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.602 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:22.602 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:22.602 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.602 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
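The lookups driven here follow one pattern: AnonHugePages has just come back 0 (so transparent hugepages are not skewing the count), and HugePages_Surp and HugePages_Rsvd come next; those counters feed the pool check of the kind traced earlier at setup/hugepages.sh@110. A rough standalone sketch of that accounting, using awk one-liners in place of the traced field-by-field scan and the 512-page figure configured for this custom_alloc run:

anon=$(awk '$1 == "AnonHugePages:"    {print $2}' /proc/meminfo)   # 0 kB here: THP not involved
surp=$(awk '$1 == "HugePages_Surp:"   {print $2}' /proc/meminfo)   # pages allocated beyond the pool
resv=$(awk '$1 == "HugePages_Rsvd:"   {print $2}' /proc/meminfo)   # reserved but not yet faulted in
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
nr_hugepages=512                       # value set by the sizing step for this test

# same shape as the check traced at setup/hugepages.sh@110 for the odd_alloc case (1025)
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool fully accounted for"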
00:05:22.602 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.602 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.602 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.602 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7581880 kB' 'MemAvailable: 10511980 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 498048 kB' 'Inactive: 2753852 kB' 'Active(anon): 128864 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191912 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- 
setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.602 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.602 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.603 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.603 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.604 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:22.604 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:22.604 09:52:21 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.604 09:52:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.604 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.604 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:22.604 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:22.604 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.604 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.604 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.604 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.604 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.604 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.604 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.604 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7581628 kB' 'MemAvailable: 10511728 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 2753852 kB' 'Active(anon): 128636 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 
51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191912 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.604 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.867 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.867 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 
00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 
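The \H\u\g\e\P\a\g\e\s\_\R\s\v\d on the right-hand side of every comparison is not literal script text: it is how bash's xtrace renders a quoted [[ == ]] pattern, escaping each character to show the match is a literal string comparison rather than a glob. A two-line demo (illustrative variable names) reproduces that rendering:

# Under `set -x`, a quoted right-hand side of [[ == ]] is traced with every character escaped.
set -x
get=HugePages_Rsvd
var=MemTotal
[[ $var == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x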
00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.868 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.868 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.868 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:22.868 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:22.868 09:52:21 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.868 nr_hugepages=512 00:05:22.868 09:52:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:22.868 resv_hugepages=0 00:05:22.868 09:52:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.868 surplus_hugepages=0 00:05:22.868 09:52:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.868 anon_hugepages=0 00:05:22.868 09:52:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.868 09:52:21 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.868 09:52:21 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:22.868 09:52:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.868 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.868 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:22.868 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:22.868 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.868 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.868 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.868 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.868 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.868 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7581628 kB' 'MemAvailable: 10511728 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497796 kB' 'Inactive: 2753852 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191908 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103480 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 
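The checks hugepages.sh runs around these traces amount to simple bookkeeping over the values already echoed above: the HugePages_Total read back from /proc/meminfo (512) has to equal the requested nr_hugepages plus surplus and reserved pages, and the per-node surplus query that follows feeds the "node0=512 expecting 512" line further down. A stand-alone restatement of that arithmetic using the figures from this log (variable names only mirror the script; get_meminfo refers to the sketch above):

# Values echoed earlier in this log.
nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
total=$(get_meminfo HugePages_Total)          # 512 in the trace
node0_surp=$(get_meminfo HugePages_Surp 0)    # 0 in the trace for node 0
# The test only passes when the readback matches the request exactly.
(( total == nr_hugepages + surplus_hugepages + resv_hugepages )) &&
    echo "node0=$(( nr_hugepages + node0_surp )) expecting $nr_hugepages"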
00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.869 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.869 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.870 09:52:21 -- setup/common.sh@33 -- # echo 512 00:05:22.870 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:22.870 09:52:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:22.870 09:52:21 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.870 09:52:21 -- setup/hugepages.sh@27 -- # local node 00:05:22.870 09:52:21 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:22.870 09:52:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:22.870 09:52:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.870 09:52:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.870 09:52:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.870 09:52:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.870 09:52:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.870 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.870 09:52:21 -- setup/common.sh@18 -- # local node=0 00:05:22.870 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:22.870 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.870 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.870 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.870 09:52:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.870 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.870 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7581628 kB' 'MemUsed: 4657484 kB' 'SwapCached: 0 kB' 'Active: 497808 kB' 'Inactive: 2753852 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133524 kB' 'Mapped: 51040 kB' 'AnonPages: 119704 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191904 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 
09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.870 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.870 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # continue 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.871 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.871 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.871 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:22.871 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:22.871 09:52:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.871 09:52:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.871 09:52:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.871 09:52:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.871 node0=512 expecting 512 00:05:22.871 09:52:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:22.871 09:52:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:22.871 00:05:22.871 real 0m0.533s 00:05:22.871 user 0m0.285s 00:05:22.871 sys 0m0.284s 00:05:22.871 09:52:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.871 09:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.871 ************************************ 00:05:22.871 END TEST custom_alloc 00:05:22.871 ************************************ 00:05:22.871 09:52:21 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:22.871 09:52:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.871 09:52:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.871 09:52:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.871 ************************************ 00:05:22.871 START TEST no_shrink_alloc 00:05:22.871 ************************************ 00:05:22.871 09:52:21 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:22.871 09:52:21 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:22.871 09:52:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:22.871 09:52:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:22.871 09:52:21 -- 
setup/hugepages.sh@51 -- # shift 00:05:22.871 09:52:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:22.871 09:52:21 -- setup/hugepages.sh@52 -- # local node_ids 00:05:22.871 09:52:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.871 09:52:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:22.871 09:52:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:22.871 09:52:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:22.871 09:52:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.871 09:52:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:22.871 09:52:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.871 09:52:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.871 09:52:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.871 09:52:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:22.871 09:52:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:22.871 09:52:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:22.871 09:52:21 -- setup/hugepages.sh@73 -- # return 0 00:05:22.871 09:52:21 -- setup/hugepages.sh@198 -- # setup output 00:05:22.871 09:52:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.871 09:52:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.131 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.131 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.131 09:52:21 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:23.131 09:52:21 -- setup/hugepages.sh@89 -- # local node 00:05:23.131 09:52:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.131 09:52:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.131 09:52:21 -- setup/hugepages.sh@92 -- # local surp 00:05:23.131 09:52:21 -- setup/hugepages.sh@93 -- # local resv 00:05:23.131 09:52:21 -- setup/hugepages.sh@94 -- # local anon 00:05:23.131 09:52:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.131 09:52:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.131 09:52:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.131 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:23.131 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:23.131 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.131 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.131 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.131 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.131 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.131 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6533356 kB' 'MemAvailable: 9463456 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 498228 kB' 'Inactive: 2753852 kB' 'Active(anon): 129044 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120124 kB' 'Mapped: 51156 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 
191920 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103492 kB' 'KernelStack: 6664 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.131 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.131 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.394 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.394 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- 
setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.395 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:23.395 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:23.395 09:52:21 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.395 09:52:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.395 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.395 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:23.395 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:23.395 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.395 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.395 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.395 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.395 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.395 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6533608 kB' 'MemAvailable: 9463708 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497964 kB' 'Inactive: 2753852 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 51160 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191916 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103488 kB' 'KernelStack: 6664 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.395 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.395 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- 
setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 
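The "@33 -- # echo 0" / "return 0" pair that ends each of these scans (just below for HugePages_Surp, and earlier for AnonHugePages) is what the caller captures; the hugepages.sh@97/@99/@100 steps are simply command substitutions storing each result. Roughly, assuming the get_meminfo sketch above:

    anon=$(get_meminfo AnonHugePages)    # 0 kB - no transparent huge pages in use
    surp=$(get_meminfo HugePages_Surp)   # 0    - no surplus huge pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0    - no reserved huge pages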
00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.396 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:23.396 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:23.396 09:52:21 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.396 09:52:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.396 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.396 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:23.396 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:23.396 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.396 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.396 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.396 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.396 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.396 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6533608 kB' 'MemAvailable: 9463708 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497664 kB' 'Inactive: 2753852 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119532 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191924 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103496 kB' 'KernelStack: 6656 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.396 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.396 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.397 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.397 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.398 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:23.398 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:23.398 09:52:21 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.398 nr_hugepages=1024 00:05:23.398 09:52:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.398 resv_hugepages=0 00:05:23.398 09:52:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.398 surplus_hugepages=0 00:05:23.398 09:52:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.398 anon_hugepages=0 00:05:23.398 09:52:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.398 09:52:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.398 09:52:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.398 09:52:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.398 09:52:21 -- 
setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.398 09:52:21 -- setup/common.sh@18 -- # local node= 00:05:23.398 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:23.398 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.398 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.398 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.398 09:52:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.398 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.398 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6533608 kB' 'MemAvailable: 9463708 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 2753852 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88428 kB' 'Slab: 191924 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103496 kB' 'KernelStack: 6640 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.398 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.398 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- 
# continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 
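The arithmetic tests traced a little earlier, (( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@107 and (( 1024 == nr_hugepages )) at @109, together with this HugePages_Total lookup form the global consistency check: the kernel must report exactly the requested number of huge pages once surplus and reserved pages are accounted for. In sketch form (nr_hugepages=1024 is the value echoed by @102; the mismatch message is illustrative, not from this log):

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)               # 1024 for this snapshot
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages"              # matches the @102 echo above
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi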
00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.399 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.399 09:52:21 -- setup/common.sh@33 -- # echo 1024 00:05:23.399 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:23.399 09:52:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.399 09:52:21 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.399 09:52:21 -- setup/hugepages.sh@27 -- # local node 00:05:23.399 09:52:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.399 09:52:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.399 09:52:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.399 09:52:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.399 09:52:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.399 09:52:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.399 09:52:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.399 09:52:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.399 09:52:21 -- setup/common.sh@18 -- # local node=0 00:05:23.399 09:52:21 -- setup/common.sh@19 -- # local var val 00:05:23.399 09:52:21 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.399 09:52:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.399 09:52:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.399 09:52:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.399 09:52:21 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.399 09:52:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.399 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6533608 kB' 'MemUsed: 5705504 kB' 'SwapCached: 0 kB' 'Active: 497600 kB' 'Inactive: 2753852 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133524 kB' 
'Mapped: 51040 kB' 'AnonPages: 119728 kB' 'Shmem: 10488 kB' 'KernelStack: 6708 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 191924 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 103496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- 
setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@32 -- # continue 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.400 09:52:21 -- setup/common.sh@31 -- # read -r var val _ 
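Once the global check passes, get_nodes enumerates /sys/devices/system/node/node* (a single node here, no_nodes=1) and the same lookup is repeated per node; note mem_f switching to /sys/devices/system/node/node0/meminfo in the @23/@24 steps before this scan. A rough per-node sketch reusing the get_meminfo helper above (the exact bookkeeping in hugepages.sh differs slightly; the expected value 1024 is taken from the nodes_sys assignment in the trace):

    shopt -s extglob nullglob
    nodes_sys=() nodes_test=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        nodes_sys[$node]=1024                                   # pages expected on this node
        nodes_test[$node]=$(get_meminfo HugePages_Total "$node")
    done
    for node in "${!nodes_test[@]}"; do
        # Fold the node's surplus pages back in (cf. the += steps at @116/@117).
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
    done

For the node0 snapshot printed above this prints "node0=1024 expecting 1024", which is the comparison echoed just after this scan in the log.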
00:05:23.400 09:52:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.400 09:52:21 -- setup/common.sh@33 -- # echo 0 00:05:23.400 09:52:21 -- setup/common.sh@33 -- # return 0 00:05:23.400 09:52:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.400 09:52:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.400 09:52:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.400 09:52:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.400 node0=1024 expecting 1024 00:05:23.400 09:52:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.400 09:52:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.400 09:52:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:23.400 09:52:21 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:23.400 09:52:21 -- setup/hugepages.sh@202 -- # setup output 00:05:23.400 09:52:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.400 09:52:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.660 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.660 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.660 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:23.660 09:52:22 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:23.660 09:52:22 -- setup/hugepages.sh@89 -- # local node 00:05:23.660 09:52:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.660 09:52:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.660 09:52:22 -- setup/hugepages.sh@92 -- # local surp 00:05:23.660 09:52:22 -- setup/hugepages.sh@93 -- # local resv 00:05:23.660 09:52:22 -- setup/hugepages.sh@94 -- # local anon 00:05:23.660 09:52:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.660 09:52:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.660 09:52:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.660 09:52:22 -- setup/common.sh@18 -- # local node= 00:05:23.660 09:52:22 -- setup/common.sh@19 -- # local var val 00:05:23.660 09:52:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.660 09:52:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.660 09:52:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.660 09:52:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.660 09:52:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.660 09:52:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.660 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.660 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6538520 kB' 'MemAvailable: 9468608 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 495704 kB' 'Inactive: 2753852 kB' 'Active(anon): 126520 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117676 kB' 'Mapped: 50336 kB' 'Shmem: 10488 kB' 'KReclaimable: 88408 kB' 'Slab: 191660 kB' 'SReclaimable: 88408 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6584 kB' 'PageTables: 4028 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- 
# continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.661 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.661 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.662 
09:52:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.662 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.662 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.662 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.662 09:52:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.662 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.662 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.923 09:52:22 -- setup/common.sh@33 -- # echo 0 00:05:23.923 09:52:22 -- setup/common.sh@33 -- # return 0 00:05:23.923 09:52:22 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.923 09:52:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.923 09:52:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.923 09:52:22 -- setup/common.sh@18 -- # local node= 00:05:23.923 09:52:22 -- setup/common.sh@19 -- # local var val 00:05:23.923 09:52:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.923 09:52:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.923 09:52:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.923 09:52:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.923 09:52:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.923 09:52:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.923 09:52:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6538776 kB' 'MemAvailable: 9468864 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 495380 kB' 'Inactive: 2753852 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117288 kB' 'Mapped: 50212 kB' 'Shmem: 10488 kB' 'KReclaimable: 88408 kB' 'Slab: 191656 kB' 'SReclaimable: 88408 kB' 'SUnreclaim: 103248 kB' 'KernelStack: 6592 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.923 
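For readers following the trace: each get_meminfo call above and below boils down to picking the right meminfo file and scanning it key by key until the requested field is found. The following is a minimal, illustrative sketch of that lookup (the function name and structure are simplified here for clarity; this is not the actual setup/common.sh code):

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node statistics instead.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node N "; strip it so the
        # key sits in the first field, as it does in /proc/meminfo.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done < "$mem_f"
    echo 0
}

Example: get_meminfo_sketch HugePages_Surp prints the system-wide surplus count, while get_meminfo_sketch HugePages_Surp 0 reads node 0 only, matching the node= and node=0 cases seen in this trace.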
09:52:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.923 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.923 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 
09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 
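While the loop traced here matches HugePages_Surp one meminfo key at a time, the same counters can be eyeballed directly when debugging a run like this one; this is only a manual cross-check, not part of the test scripts:

# System-wide hugepage counters:
grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
# Node 0 only (these lines carry a "Node 0" prefix):
grep HugePages /sys/devices/system/node/node0/meminfo

On this host both should show 1024 total and 1024 free 2048 kB pages, consistent with the values printed in the dumps above.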
00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.924 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.924 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # 
continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.925 09:52:22 -- setup/common.sh@33 -- # echo 0 00:05:23.925 09:52:22 -- setup/common.sh@33 -- # return 0 00:05:23.925 09:52:22 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.925 09:52:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.925 09:52:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.925 09:52:22 -- setup/common.sh@18 -- # local node= 00:05:23.925 09:52:22 -- setup/common.sh@19 -- # local var val 00:05:23.925 09:52:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.925 09:52:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.925 09:52:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.925 09:52:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.925 09:52:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.925 09:52:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539036 kB' 'MemAvailable: 9469124 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 495380 kB' 'Inactive: 2753852 kB' 'Active(anon): 126196 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117340 kB' 'Mapped: 50212 kB' 'Shmem: 10488 kB' 'KReclaimable: 88408 kB' 'Slab: 191656 kB' 'SReclaimable: 88408 kB' 'SUnreclaim: 103248 kB' 'KernelStack: 6576 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # 
continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.925 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.925 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 
-- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- 
setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.926 09:52:22 -- setup/common.sh@33 -- # echo 0 00:05:23.926 09:52:22 -- setup/common.sh@33 -- # return 0 00:05:23.926 09:52:22 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.926 nr_hugepages=1024 00:05:23.926 09:52:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.926 resv_hugepages=0 00:05:23.926 09:52:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.926 surplus_hugepages=0 00:05:23.926 09:52:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.926 09:52:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.926 anon_hugepages=0 00:05:23.926 09:52:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.926 09:52:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.926 09:52:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.926 09:52:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.926 09:52:22 -- setup/common.sh@18 -- # local node= 00:05:23.926 09:52:22 -- 
setup/common.sh@19 -- # local var val 00:05:23.926 09:52:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.926 09:52:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.926 09:52:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.926 09:52:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.926 09:52:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.926 09:52:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539036 kB' 'MemAvailable: 9469124 kB' 'Buffers: 3704 kB' 'Cached: 3129820 kB' 'SwapCached: 0 kB' 'Active: 495312 kB' 'Inactive: 2753852 kB' 'Active(anon): 126128 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117220 kB' 'Mapped: 50212 kB' 'Shmem: 10488 kB' 'KReclaimable: 88408 kB' 'Slab: 191656 kB' 'SReclaimable: 88408 kB' 'SUnreclaim: 103248 kB' 'KernelStack: 6560 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 5056512 kB' 'DirectMap1G: 9437184 kB' 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 
09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.926 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.926 09:52:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 
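Taken together, the four scans in this block (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total) feed a simple accounting check. A rough sketch of that logic, reusing the get_meminfo_sketch helper from the earlier sketch and simplified from what setup/hugepages.sh actually does:

verify_hugepages_sketch() {
    local expected=$1    # e.g. 1024, the count the test expects
    local surp resv total
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)
    # System-wide: the allocated total must equal the expected count plus
    # any surplus and reserved pages (all zero in this run).
    (( total == expected + surp + resv )) || return 1
    # Per node: report each node's surplus so the per-node split can be
    # checked the same way, as the trace goes on to do for node0.
    local node
    for node in /sys/devices/system/node/node[0-9]*; do
        printf 'node%s HugePages_Surp: %s\n' "${node##*node}" \
            "$(get_meminfo_sketch HugePages_Surp "${node##*node}")"
    done
}

With expected=1024 and the values shown in these dumps (surplus 0, reserved 0, total 1024), the system-wide check passes, which is why the trace then proceeds to the per-node HugePages_Surp lookup on node0.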
09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- 
setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.927 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.927 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.927 09:52:22 -- setup/common.sh@33 -- # echo 1024 00:05:23.927 09:52:22 -- setup/common.sh@33 -- # return 0 00:05:23.927 09:52:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.927 09:52:22 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.927 09:52:22 -- setup/hugepages.sh@27 -- # local node 00:05:23.927 09:52:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.927 09:52:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.927 09:52:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.927 09:52:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.928 09:52:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.928 09:52:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.928 09:52:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.928 09:52:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.928 09:52:22 -- setup/common.sh@18 -- # local node=0 00:05:23.928 09:52:22 -- setup/common.sh@19 -- # local var val 00:05:23.928 09:52:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.928 09:52:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.928 09:52:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.928 09:52:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.928 09:52:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.928 09:52:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539620 kB' 'MemUsed: 5699492 kB' 'SwapCached: 0 kB' 'Active: 494940 kB' 'Inactive: 2753852 kB' 'Active(anon): 125756 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2753852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133524 kB' 'Mapped: 50472 kB' 'AnonPages: 116880 kB' 'Shmem: 10488 kB' 
'KernelStack: 6608 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88408 kB' 'Slab: 191652 kB' 'SReclaimable: 88408 kB' 'SUnreclaim: 103244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.928 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.928 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # continue 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.929 09:52:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.929 09:52:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.929 
09:52:22 -- setup/common.sh@33 -- # echo 0 00:05:23.929 09:52:22 -- setup/common.sh@33 -- # return 0 00:05:23.929 09:52:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.929 09:52:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.929 09:52:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.929 09:52:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.929 node0=1024 expecting 1024 00:05:23.929 09:52:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.929 09:52:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.929 00:05:23.929 real 0m1.051s 00:05:23.929 user 0m0.515s 00:05:23.929 sys 0m0.604s 00:05:23.929 09:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.929 09:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:23.929 ************************************ 00:05:23.929 END TEST no_shrink_alloc 00:05:23.929 ************************************ 00:05:23.929 09:52:22 -- setup/hugepages.sh@217 -- # clear_hp 00:05:23.929 09:52:22 -- setup/hugepages.sh@37 -- # local node hp 00:05:23.929 09:52:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:23.929 09:52:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:23.929 09:52:22 -- setup/hugepages.sh@41 -- # echo 0 00:05:23.929 09:52:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:23.929 09:52:22 -- setup/hugepages.sh@41 -- # echo 0 00:05:23.929 09:52:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:23.929 09:52:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:23.929 ************************************ 00:05:23.929 END TEST hugepages 00:05:23.929 ************************************ 00:05:23.929 00:05:23.929 real 0m4.844s 00:05:23.929 user 0m2.284s 00:05:23.929 sys 0m2.559s 00:05:23.929 09:52:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.929 09:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:23.929 09:52:22 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:23.929 09:52:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.929 09:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.929 09:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:23.929 ************************************ 00:05:23.929 START TEST driver 00:05:23.929 ************************************ 00:05:23.929 09:52:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:24.188 * Looking for test storage... 
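The long field-by-field trace that finishes above is setup/common.sh's get_meminfo walking the node-0 meminfo until it reaches the requested key (HugePages_Total, then HugePages_Surp), after which hugepages.sh confirms "node0=1024 expecting 1024" and clear_hp zeroes every pool. A minimal stand-alone sketch of the same idea, assuming standard procfs/sysfs paths and root for the final write (this is not the SPDK helper itself):

# Print the value of a meminfo field, preferring the per-node file when one exists.
get_node_meminfo() {
    local field=$1 node=${2:-0} f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
        f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; global ones like "HugePages_Surp: 0".
    awk -v key="${field}:" '{ for (i = 1; i <= NF; i++) if ($i == key) print $(i + 1) }' "$f"
}

get_node_meminfo HugePages_Total 0    # would print 1024 in the run above

# Equivalent of clear_hp: return every hugepage pool on every node to zero (needs root).
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done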
00:05:24.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:24.188 09:52:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:24.188 09:52:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:24.188 09:52:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:24.188 09:52:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:24.188 09:52:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:24.188 09:52:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:24.188 09:52:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:24.188 09:52:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:24.188 09:52:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:24.188 09:52:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.188 09:52:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:24.188 09:52:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:24.188 09:52:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:24.188 09:52:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:24.188 09:52:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:24.188 09:52:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:24.188 09:52:22 -- scripts/common.sh@344 -- # : 1 00:05:24.188 09:52:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:24.188 09:52:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.188 09:52:22 -- scripts/common.sh@364 -- # decimal 1 00:05:24.188 09:52:22 -- scripts/common.sh@352 -- # local d=1 00:05:24.188 09:52:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.188 09:52:22 -- scripts/common.sh@354 -- # echo 1 00:05:24.188 09:52:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:24.188 09:52:22 -- scripts/common.sh@365 -- # decimal 2 00:05:24.188 09:52:22 -- scripts/common.sh@352 -- # local d=2 00:05:24.188 09:52:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.188 09:52:22 -- scripts/common.sh@354 -- # echo 2 00:05:24.188 09:52:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:24.188 09:52:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:24.188 09:52:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:24.188 09:52:22 -- scripts/common.sh@367 -- # return 0 00:05:24.188 09:52:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.188 09:52:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:24.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.188 --rc genhtml_branch_coverage=1 00:05:24.188 --rc genhtml_function_coverage=1 00:05:24.188 --rc genhtml_legend=1 00:05:24.188 --rc geninfo_all_blocks=1 00:05:24.188 --rc geninfo_unexecuted_blocks=1 00:05:24.188 00:05:24.188 ' 00:05:24.188 09:52:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:24.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.188 --rc genhtml_branch_coverage=1 00:05:24.188 --rc genhtml_function_coverage=1 00:05:24.188 --rc genhtml_legend=1 00:05:24.188 --rc geninfo_all_blocks=1 00:05:24.188 --rc geninfo_unexecuted_blocks=1 00:05:24.188 00:05:24.188 ' 00:05:24.188 09:52:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:24.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.188 --rc genhtml_branch_coverage=1 00:05:24.188 --rc genhtml_function_coverage=1 00:05:24.188 --rc genhtml_legend=1 00:05:24.188 --rc geninfo_all_blocks=1 00:05:24.188 --rc geninfo_unexecuted_blocks=1 00:05:24.188 00:05:24.188 ' 00:05:24.188 09:52:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:24.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.188 --rc genhtml_branch_coverage=1 00:05:24.188 --rc genhtml_function_coverage=1 00:05:24.188 --rc genhtml_legend=1 00:05:24.188 --rc geninfo_all_blocks=1 00:05:24.188 --rc geninfo_unexecuted_blocks=1 00:05:24.188 00:05:24.188 ' 00:05:24.188 09:52:22 -- setup/driver.sh@68 -- # setup reset 00:05:24.188 09:52:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.188 09:52:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.756 09:52:23 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:24.756 09:52:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.756 09:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.756 09:52:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.756 ************************************ 00:05:24.756 START TEST guess_driver 00:05:24.756 ************************************ 00:05:24.756 09:52:23 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:24.756 09:52:23 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:24.756 09:52:23 -- setup/driver.sh@47 -- # local fail=0 00:05:24.756 09:52:23 -- setup/driver.sh@49 -- # pick_driver 00:05:24.756 09:52:23 -- setup/driver.sh@36 -- # vfio 00:05:24.756 09:52:23 -- setup/driver.sh@21 -- # local iommu_grups 00:05:24.756 09:52:23 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:24.756 09:52:23 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:24.756 09:52:23 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:24.756 09:52:23 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:24.756 09:52:23 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:24.756 09:52:23 -- setup/driver.sh@32 -- # return 1 00:05:24.756 09:52:23 -- setup/driver.sh@38 -- # uio 00:05:24.756 09:52:23 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:24.756 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:24.756 09:52:23 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:24.756 09:52:23 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:24.756 Looking for driver=uio_pci_generic 00:05:24.756 09:52:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.756 09:52:23 -- setup/driver.sh@45 -- # setup output config 00:05:24.756 09:52:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.756 09:52:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.350 09:52:23 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:25.350 09:52:23 -- setup/driver.sh@58 -- # continue 00:05:25.350 09:52:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.609 09:52:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.609 09:52:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:25.609 09:52:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.609 09:52:24 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:25.609 09:52:24 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:25.609 09:52:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.609 09:52:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:25.609 09:52:24 -- setup/driver.sh@65 -- # setup reset 00:05:25.609 09:52:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.609 09:52:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.176 ************************************ 00:05:26.176 END TEST guess_driver 00:05:26.176 ************************************ 00:05:26.176 00:05:26.176 real 0m1.399s 00:05:26.176 user 0m0.535s 00:05:26.176 sys 0m0.862s 00:05:26.176 09:52:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.176 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 00:05:26.176 real 0m2.183s 00:05:26.176 user 0m0.874s 00:05:26.176 sys 0m1.377s 00:05:26.176 09:52:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.176 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 ************************************ 00:05:26.176 END TEST driver 00:05:26.176 ************************************ 00:05:26.176 09:52:24 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:26.176 09:52:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.176 09:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.176 09:52:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 ************************************ 00:05:26.176 START TEST devices 00:05:26.176 ************************************ 00:05:26.176 09:52:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:26.436 * Looking for test storage... 00:05:26.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:26.436 09:52:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.436 09:52:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.436 09:52:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.436 09:52:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.436 09:52:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.436 09:52:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.436 09:52:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.436 09:52:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.436 09:52:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.436 09:52:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.436 09:52:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.436 09:52:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.436 09:52:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.436 09:52:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.436 09:52:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.436 09:52:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.436 09:52:24 -- scripts/common.sh@344 -- # : 1 00:05:26.436 09:52:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.436 09:52:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.436 09:52:24 -- scripts/common.sh@364 -- # decimal 1 00:05:26.436 09:52:24 -- scripts/common.sh@352 -- # local d=1 00:05:26.436 09:52:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.436 09:52:24 -- scripts/common.sh@354 -- # echo 1 00:05:26.436 09:52:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.436 09:52:24 -- scripts/common.sh@365 -- # decimal 2 00:05:26.436 09:52:24 -- scripts/common.sh@352 -- # local d=2 00:05:26.436 09:52:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.436 09:52:24 -- scripts/common.sh@354 -- # echo 2 00:05:26.436 09:52:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.436 09:52:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.436 09:52:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.436 09:52:24 -- scripts/common.sh@367 -- # return 0 00:05:26.436 09:52:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.436 09:52:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 09:52:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 09:52:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 09:52:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 09:52:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:26.436 09:52:24 -- setup/devices.sh@192 -- # setup reset 00:05:26.436 09:52:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.436 09:52:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.380 09:52:25 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:27.380 09:52:25 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:27.380 09:52:25 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:27.380 09:52:25 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:27.380 09:52:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.380 09:52:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:27.380 09:52:25 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:27.380 09:52:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.380 09:52:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:27.380 09:52:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:27.380 09:52:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.380 09:52:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:27.380 09:52:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:27.380 09:52:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.380 09:52:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:27.380 09:52:25 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:27.380 09:52:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:27.380 09:52:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.380 09:52:25 -- setup/devices.sh@196 -- # blocks=() 00:05:27.380 09:52:25 -- setup/devices.sh@196 -- # declare -a blocks 00:05:27.380 09:52:25 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:27.380 09:52:25 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:27.380 09:52:25 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:27.380 09:52:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:27.380 09:52:25 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:27.380 09:52:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:27.380 09:52:25 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:27.380 09:52:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:27.380 No valid GPT data, bailing 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # pt= 00:05:27.380 09:52:25 -- scripts/common.sh@394 -- # return 1 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:27.380 09:52:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:27.380 09:52:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:27.380 09:52:25 -- setup/common.sh@80 -- # echo 5368709120 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:27.380 09:52:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.380 09:52:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:27.380 09:52:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:27.380 09:52:25 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
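Before any namespace is partitioned, get_zoned_devs screens /sys/block/nvme* and excludes anything whose queue/zoned attribute reports something other than none; in this run all four namespaces are conventional, so nothing is filtered out. A rough equivalent under the same sysfs layout (illustrative only, not the repo's function):

zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    # "none" marks a conventional namespace; host-aware/host-managed zoned devices are skipped.
    if [[ -e $nvme/queue/zoned && $(cat "$nvme/queue/zoned") != none ]]; then
        zoned_devs+=("$dev")
    fi
done
echo "zoned devices to avoid: ${zoned_devs[*]:-none}"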
00:05:27.380 09:52:25 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:27.380 09:52:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:27.380 No valid GPT data, bailing 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # pt= 00:05:27.380 09:52:25 -- scripts/common.sh@394 -- # return 1 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:27.380 09:52:25 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:27.380 09:52:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:27.380 09:52:25 -- setup/common.sh@80 -- # echo 4294967296 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:27.380 09:52:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.380 09:52:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:27.380 09:52:25 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:27.380 09:52:25 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:27.380 09:52:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:27.380 No valid GPT data, bailing 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # pt= 00:05:27.380 09:52:25 -- scripts/common.sh@394 -- # return 1 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:27.380 09:52:25 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:27.380 09:52:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:27.380 09:52:25 -- setup/common.sh@80 -- # echo 4294967296 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:27.380 09:52:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.380 09:52:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:27.380 09:52:25 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:27.380 09:52:25 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:27.380 09:52:25 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:27.380 09:52:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:27.380 No valid GPT data, bailing 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:27.380 09:52:25 -- scripts/common.sh@393 -- # pt= 00:05:27.380 09:52:25 -- scripts/common.sh@394 -- # return 1 00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:27.380 09:52:25 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:27.380 09:52:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:27.380 09:52:25 -- setup/common.sh@80 -- # echo 4294967296 
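Each free namespace (spdk-gpt.py and blkid -s PTTYPE both report no partition table, so block_in_use returns 1) is then measured with sec_size_to_bytes and compared against min_disk_size of 3221225472 bytes (3 GiB); nvme0n1 reports 5368709120 bytes and the other namespaces 4294967296, so all qualify. The size itself is just the 512-byte sector count from sysfs; a hedged sketch, not the repo's helper:

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, matching the trace above

dev_size_bytes() {
    # /sys/block/<dev>/size is always reported in 512-byte sectors, whatever the LBA format.
    echo $(( $(cat "/sys/block/$1/size") * 512 ))
}

for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    (( $(dev_size_bytes "$dev") >= min_disk_size )) && echo "$dev is large enough"
done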
00:05:27.380 09:52:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:27.380 09:52:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.380 09:52:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:27.380 09:52:25 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:27.380 09:52:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:27.380 09:52:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:27.380 09:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.380 09:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.380 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:05:27.380 ************************************ 00:05:27.380 START TEST nvme_mount 00:05:27.380 ************************************ 00:05:27.380 09:52:25 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:27.380 09:52:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:27.380 09:52:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:27.380 09:52:25 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.380 09:52:25 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.380 09:52:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:27.380 09:52:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.380 09:52:25 -- setup/common.sh@40 -- # local part_no=1 00:05:27.380 09:52:25 -- setup/common.sh@41 -- # local size=1073741824 00:05:27.380 09:52:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.380 09:52:25 -- setup/common.sh@44 -- # parts=() 00:05:27.380 09:52:25 -- setup/common.sh@44 -- # local parts 00:05:27.381 09:52:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.381 09:52:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.381 09:52:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.381 09:52:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:27.381 09:52:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.381 09:52:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:27.381 09:52:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.381 09:52:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:28.755 Creating new GPT entries in memory. 00:05:28.755 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.755 other utilities. 00:05:28.755 09:52:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.755 09:52:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.755 09:52:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.755 09:52:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.755 09:52:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:29.692 Creating new GPT entries in memory. 00:05:29.692 The operation has completed successfully. 
00:05:29.692 09:52:28 -- setup/common.sh@57 -- # (( part++ )) 00:05:29.692 09:52:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.692 09:52:28 -- setup/common.sh@62 -- # wait 65894 00:05:29.692 09:52:28 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.692 09:52:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:29.692 09:52:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.692 09:52:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:29.692 09:52:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:29.692 09:52:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.692 09:52:28 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.692 09:52:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.692 09:52:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:29.692 09:52:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.692 09:52:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.692 09:52:28 -- setup/devices.sh@53 -- # local found=0 00:05:29.692 09:52:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.692 09:52:28 -- setup/devices.sh@56 -- # : 00:05:29.692 09:52:28 -- setup/devices.sh@59 -- # local pci status 00:05:29.692 09:52:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.692 09:52:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.692 09:52:28 -- setup/devices.sh@47 -- # setup output config 00:05:29.692 09:52:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.692 09:52:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.692 09:52:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.692 09:52:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:29.692 09:52:28 -- setup/devices.sh@63 -- # found=1 00:05:29.692 09:52:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.692 09:52:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.692 09:52:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.259 09:52:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.259 09:52:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.259 09:52:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.259 09:52:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.259 09:52:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.259 09:52:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:30.259 09:52:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.259 09:52:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.259 09:52:28 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.259 09:52:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:30.259 09:52:28 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.259 09:52:28 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.259 09:52:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.259 09:52:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:30.259 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.259 09:52:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.259 09:52:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.518 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.518 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.518 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:30.518 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:30.518 09:52:29 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:30.518 09:52:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:30.518 09:52:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.518 09:52:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:30.518 09:52:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:30.518 09:52:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.518 09:52:29 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.518 09:52:29 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:30.518 09:52:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:30.518 09:52:29 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.518 09:52:29 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.518 09:52:29 -- setup/devices.sh@53 -- # local found=0 00:05:30.518 09:52:29 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:30.518 09:52:29 -- setup/devices.sh@56 -- # : 00:05:30.518 09:52:29 -- setup/devices.sh@59 -- # local pci status 00:05:30.518 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.518 09:52:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:30.518 09:52:29 -- setup/devices.sh@47 -- # setup output config 00:05:30.518 09:52:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.518 09:52:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.777 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.777 09:52:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:30.777 09:52:29 -- setup/devices.sh@63 -- # found=1 00:05:30.777 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.777 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:30.777 
09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.035 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.035 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.035 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.035 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.294 09:52:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.294 09:52:29 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:31.294 09:52:29 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.294 09:52:29 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.294 09:52:29 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:31.294 09:52:29 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.294 09:52:29 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:31.294 09:52:29 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.294 09:52:29 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:31.294 09:52:29 -- setup/devices.sh@50 -- # local mount_point= 00:05:31.294 09:52:29 -- setup/devices.sh@51 -- # local test_file= 00:05:31.294 09:52:29 -- setup/devices.sh@53 -- # local found=0 00:05:31.294 09:52:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.294 09:52:29 -- setup/devices.sh@59 -- # local pci status 00:05:31.294 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.294 09:52:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.294 09:52:29 -- setup/devices.sh@47 -- # setup output config 00:05:31.294 09:52:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.294 09:52:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.553 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.553 09:52:29 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:31.553 09:52:29 -- setup/devices.sh@63 -- # found=1 00:05:31.553 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.553 09:52:29 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.553 09:52:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.811 09:52:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.811 09:52:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.811 09:52:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.811 09:52:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.811 09:52:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.811 09:52:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.811 09:52:30 -- setup/devices.sh@68 -- # return 0 00:05:31.811 09:52:30 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:31.811 09:52:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:31.811 09:52:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.811 09:52:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.811 09:52:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.069 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:32.069 00:05:32.069 real 0m4.447s 00:05:32.069 user 0m0.986s 00:05:32.069 sys 0m1.142s 00:05:32.069 09:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.069 09:52:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.069 ************************************ 00:05:32.069 END TEST nvme_mount 00:05:32.069 ************************************ 00:05:32.069 09:52:30 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:32.069 09:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.069 09:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.069 09:52:30 -- common/autotest_common.sh@10 -- # set +x 00:05:32.069 ************************************ 00:05:32.069 START TEST dm_mount 00:05:32.069 ************************************ 00:05:32.069 09:52:30 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:32.069 09:52:30 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:32.069 09:52:30 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:32.069 09:52:30 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:32.069 09:52:30 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:32.069 09:52:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:32.069 09:52:30 -- setup/common.sh@40 -- # local part_no=2 00:05:32.069 09:52:30 -- setup/common.sh@41 -- # local size=1073741824 00:05:32.069 09:52:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:32.069 09:52:30 -- setup/common.sh@44 -- # parts=() 00:05:32.069 09:52:30 -- setup/common.sh@44 -- # local parts 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.069 09:52:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part++ )) 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.069 09:52:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part++ )) 00:05:32.069 09:52:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.069 09:52:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:32.069 09:52:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:32.069 09:52:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:33.004 Creating new GPT entries in memory. 00:05:33.004 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:33.004 other utilities. 00:05:33.004 09:52:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:33.004 09:52:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.004 09:52:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:33.004 09:52:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.004 09:52:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:33.941 Creating new GPT entries in memory. 00:05:33.941 The operation has completed successfully. 00:05:33.941 09:52:32 -- setup/common.sh@57 -- # (( part++ )) 00:05:33.941 09:52:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.941 09:52:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:33.941 09:52:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.941 09:52:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:35.318 The operation has completed successfully. 00:05:35.318 09:52:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:35.318 09:52:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.318 09:52:33 -- setup/common.sh@62 -- # wait 66353 00:05:35.318 09:52:33 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:35.318 09:52:33 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.318 09:52:33 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.319 09:52:33 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:35.319 09:52:33 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:35.319 09:52:33 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.319 09:52:33 -- setup/devices.sh@161 -- # break 00:05:35.319 09:52:33 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.319 09:52:33 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:35.319 09:52:33 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:35.319 09:52:33 -- setup/devices.sh@166 -- # dm=dm-0 00:05:35.319 09:52:33 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:35.319 09:52:33 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:35.319 09:52:33 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.319 09:52:33 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:35.319 09:52:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.319 09:52:33 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:35.319 09:52:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:35.319 09:52:33 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.319 09:52:33 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.319 09:52:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:35.319 09:52:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:35.319 09:52:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.319 09:52:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.319 09:52:33 -- setup/devices.sh@53 -- # local found=0 00:05:35.319 09:52:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:35.319 09:52:33 -- setup/devices.sh@56 -- # : 00:05:35.319 09:52:33 -- setup/devices.sh@59 -- # local pci status 00:05:35.319 09:52:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.319 09:52:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:35.319 09:52:33 -- setup/devices.sh@47 -- # setup output config 00:05:35.319 09:52:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.319 09:52:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:35.319 09:52:33 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.319 09:52:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:35.319 09:52:33 -- setup/devices.sh@63 -- # found=1 00:05:35.319 09:52:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.319 09:52:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.319 09:52:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.577 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.578 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.836 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:35.836 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.836 09:52:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.836 09:52:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:35.836 09:52:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.836 09:52:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:35.836 09:52:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:35.836 09:52:34 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:35.836 09:52:34 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:35.836 09:52:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:35.836 09:52:34 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:35.836 09:52:34 -- setup/devices.sh@50 -- # local mount_point= 00:05:35.836 09:52:34 -- setup/devices.sh@51 -- # local test_file= 00:05:35.836 09:52:34 -- setup/devices.sh@53 -- # local found=0 00:05:35.836 09:52:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.836 09:52:34 -- setup/devices.sh@59 -- # local pci status 00:05:35.836 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.836 09:52:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:35.836 09:52:34 -- setup/devices.sh@47 -- # setup output config 00:05:35.836 09:52:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.836 09:52:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.095 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.095 09:52:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:36.095 09:52:34 -- setup/devices.sh@63 -- # found=1 00:05:36.095 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.095 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.095 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.354 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.354 09:52:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.354 09:52:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.354 09:52:34 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.355 09:52:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.355 09:52:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:36.355 09:52:34 -- setup/devices.sh@68 -- # return 0 00:05:36.355 09:52:34 -- setup/devices.sh@187 -- # cleanup_dm 00:05:36.355 09:52:34 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.355 09:52:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:36.355 09:52:34 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:36.616 09:52:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.616 09:52:34 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:36.616 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:36.616 09:52:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:36.616 09:52:35 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:36.616 00:05:36.616 real 0m4.525s 00:05:36.616 user 0m0.660s 00:05:36.616 sys 0m0.793s 00:05:36.616 09:52:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.616 ************************************ 00:05:36.616 END TEST dm_mount 00:05:36.616 ************************************ 00:05:36.616 09:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.616 09:52:35 -- setup/devices.sh@1 -- # cleanup 00:05:36.616 09:52:35 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:36.616 09:52:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:36.616 09:52:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.616 09:52:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:36.616 09:52:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.616 09:52:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:36.874 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.874 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:36.874 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:36.874 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:36.874 09:52:35 -- setup/devices.sh@12 -- # cleanup_dm 00:05:36.874 09:52:35 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:36.874 09:52:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:36.874 09:52:35 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:36.874 09:52:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:36.874 09:52:35 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:36.874 09:52:35 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:36.874 00:05:36.874 real 0m10.608s 00:05:36.874 user 0m2.401s 00:05:36.874 sys 0m2.537s 00:05:36.874 09:52:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.874 09:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.874 ************************************ 00:05:36.874 END TEST devices 00:05:36.874 ************************************ 00:05:36.874 00:05:36.874 real 0m22.455s 00:05:36.874 user 0m7.709s 00:05:36.874 sys 0m9.113s 00:05:36.874 ************************************ 00:05:36.874 END TEST setup.sh 00:05:36.875 ************************************ 00:05:36.875 09:52:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.875 09:52:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.875 09:52:35 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:37.133 Hugepages 00:05:37.133 node hugesize free / total 00:05:37.133 node0 1048576kB 0 / 0 00:05:37.133 node0 2048kB 2048 / 2048 00:05:37.133 00:05:37.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.133 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:37.133 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:37.392 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:37.392 09:52:35 -- spdk/autotest.sh@128 -- # uname -s 00:05:37.392 09:52:35 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:37.392 09:52:35 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:37.392 09:52:35 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.960 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.219 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.219 09:52:36 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:39.152 09:52:37 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:39.152 09:52:37 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:39.152 09:52:37 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.152 09:52:37 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:39.152 09:52:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:39.152 09:52:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:39.152 09:52:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.152 09:52:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:39.152 09:52:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:39.152 09:52:37 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:39.152 09:52:37 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:39.152 09:52:37 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.669 Waiting for block devices as requested 00:05:39.669 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.669 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.669 09:52:38 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:39.669 09:52:38 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:39.669 09:52:38 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.669 09:52:38 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:39.669 09:52:38 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:39.669 09:52:38 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:39.669 09:52:38 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:39.669 09:52:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.669 09:52:38 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:39.669 09:52:38 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:39.669 09:52:38 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:39.930 09:52:38 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:39.930 09:52:38 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:39.930 09:52:38 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:39.930 09:52:38 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:39.930 09:52:38 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:39.930 09:52:38 -- common/autotest_common.sh@1552 -- # continue 00:05:39.930 09:52:38 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:39.930 09:52:38 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:39.930 09:52:38 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:39.930 09:52:38 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:39.930 09:52:38 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:39.930 09:52:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:39.930 09:52:38 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:39.930 09:52:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.931 09:52:38 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:39.931 09:52:38 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:39.931 09:52:38 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:39.931 09:52:38 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:39.931 09:52:38 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:39.931 09:52:38 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:39.931 09:52:38 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:39.931 09:52:38 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:39.931 09:52:38 -- common/autotest_common.sh@1552 -- # continue 00:05:39.931 09:52:38 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:39.931 09:52:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.931 09:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.931 09:52:38 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:39.931 09:52:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.931 09:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.931 09:52:38 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.498 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.757 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:40.757 09:52:39 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:40.757 09:52:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.757 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.757 09:52:39 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:40.757 09:52:39 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:40.757 09:52:39 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:40.757 09:52:39 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:40.757 09:52:39 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:40.757 09:52:39 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:40.757 09:52:39 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:40.757 09:52:39 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:40.757 09:52:39 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.757 09:52:39 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:40.757 09:52:39 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:40.757 09:52:39 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:40.757 09:52:39 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:40.757 09:52:39 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:40.757 09:52:39 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:40.757 09:52:39 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:40.757 09:52:39 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.757 09:52:39 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:40.757 09:52:39 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:40.757 09:52:39 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:40.757 09:52:39 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:40.757 09:52:39 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:40.757 09:52:39 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:40.757 09:52:39 -- common/autotest_common.sh@1588 -- # return 0 00:05:40.757 09:52:39 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:40.757 09:52:39 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:40.757 09:52:39 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:40.757 09:52:39 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:40.757 09:52:39 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:40.757 09:52:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.757 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.757 09:52:39 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:40.757 09:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.757 09:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.757 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:40.757 ************************************ 00:05:40.757 START TEST env 00:05:40.757 ************************************ 00:05:40.757 09:52:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:41.016 * Looking for test storage... 
00:05:41.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:41.016 09:52:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:41.016 09:52:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:41.016 09:52:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:41.016 09:52:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:41.016 09:52:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:41.016 09:52:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:41.016 09:52:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:41.016 09:52:39 -- scripts/common.sh@335 -- # IFS=.-: 00:05:41.016 09:52:39 -- scripts/common.sh@335 -- # read -ra ver1 00:05:41.016 09:52:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.016 09:52:39 -- scripts/common.sh@336 -- # read -ra ver2 00:05:41.016 09:52:39 -- scripts/common.sh@337 -- # local 'op=<' 00:05:41.016 09:52:39 -- scripts/common.sh@339 -- # ver1_l=2 00:05:41.016 09:52:39 -- scripts/common.sh@340 -- # ver2_l=1 00:05:41.016 09:52:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:41.016 09:52:39 -- scripts/common.sh@343 -- # case "$op" in 00:05:41.016 09:52:39 -- scripts/common.sh@344 -- # : 1 00:05:41.016 09:52:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:41.016 09:52:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.016 09:52:39 -- scripts/common.sh@364 -- # decimal 1 00:05:41.016 09:52:39 -- scripts/common.sh@352 -- # local d=1 00:05:41.016 09:52:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.016 09:52:39 -- scripts/common.sh@354 -- # echo 1 00:05:41.016 09:52:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:41.016 09:52:39 -- scripts/common.sh@365 -- # decimal 2 00:05:41.016 09:52:39 -- scripts/common.sh@352 -- # local d=2 00:05:41.016 09:52:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.016 09:52:39 -- scripts/common.sh@354 -- # echo 2 00:05:41.016 09:52:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:41.016 09:52:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:41.016 09:52:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:41.016 09:52:39 -- scripts/common.sh@367 -- # return 0 00:05:41.016 09:52:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.016 09:52:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:41.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.016 --rc genhtml_branch_coverage=1 00:05:41.016 --rc genhtml_function_coverage=1 00:05:41.016 --rc genhtml_legend=1 00:05:41.016 --rc geninfo_all_blocks=1 00:05:41.016 --rc geninfo_unexecuted_blocks=1 00:05:41.016 00:05:41.016 ' 00:05:41.016 09:52:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:41.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.016 --rc genhtml_branch_coverage=1 00:05:41.016 --rc genhtml_function_coverage=1 00:05:41.016 --rc genhtml_legend=1 00:05:41.016 --rc geninfo_all_blocks=1 00:05:41.016 --rc geninfo_unexecuted_blocks=1 00:05:41.016 00:05:41.016 ' 00:05:41.016 09:52:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:41.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.016 --rc genhtml_branch_coverage=1 00:05:41.016 --rc genhtml_function_coverage=1 00:05:41.016 --rc genhtml_legend=1 00:05:41.016 --rc geninfo_all_blocks=1 00:05:41.016 --rc geninfo_unexecuted_blocks=1 00:05:41.016 00:05:41.016 ' 00:05:41.016 09:52:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:41.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.016 --rc genhtml_branch_coverage=1 00:05:41.016 --rc genhtml_function_coverage=1 00:05:41.016 --rc genhtml_legend=1 00:05:41.016 --rc geninfo_all_blocks=1 00:05:41.016 --rc geninfo_unexecuted_blocks=1 00:05:41.016 00:05:41.016 ' 00:05:41.016 09:52:39 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.016 09:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.016 09:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.016 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.016 ************************************ 00:05:41.016 START TEST env_memory 00:05:41.016 ************************************ 00:05:41.016 09:52:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.016 00:05:41.016 00:05:41.016 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.016 http://cunit.sourceforge.net/ 00:05:41.016 00:05:41.016 00:05:41.016 Suite: memory 00:05:41.016 Test: alloc and free memory map ...[2024-12-16 09:52:39.573903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.016 passed 00:05:41.016 Test: mem map translation ...[2024-12-16 09:52:39.605284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.016 [2024-12-16 09:52:39.605329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.016 [2024-12-16 09:52:39.605393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.016 [2024-12-16 09:52:39.605406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.275 passed 00:05:41.275 Test: mem map registration ...[2024-12-16 09:52:39.669576] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:41.275 [2024-12-16 09:52:39.669618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:41.275 passed 00:05:41.275 Test: mem map adjacent registrations ...passed 00:05:41.275 00:05:41.275 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.275 suites 1 1 n/a 0 0 00:05:41.275 tests 4 4 4 0 0 00:05:41.275 asserts 152 152 152 0 n/a 00:05:41.275 00:05:41.275 Elapsed time = 0.213 seconds 00:05:41.275 00:05:41.275 real 0m0.230s 00:05:41.275 user 0m0.212s 00:05:41.275 sys 0m0.013s 00:05:41.275 09:52:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.275 09:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:41.275 ************************************ 00:05:41.275 END TEST env_memory 00:05:41.275 ************************************ 00:05:41.275 09:52:39 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.275 09:52:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.275 09:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.275 09:52:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.275 ************************************ 00:05:41.275 START TEST env_vtophys 00:05:41.275 ************************************ 00:05:41.275 09:52:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.275 EAL: lib.eal log level changed from notice to debug 00:05:41.275 EAL: Detected lcore 0 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 1 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 2 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 3 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 4 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 5 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 6 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 7 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 8 as core 0 on socket 0 00:05:41.275 EAL: Detected lcore 9 as core 0 on socket 0 00:05:41.275 EAL: Maximum logical cores by configuration: 128 00:05:41.275 EAL: Detected CPU lcores: 10 00:05:41.275 EAL: Detected NUMA nodes: 1 00:05:41.275 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:41.275 EAL: Detected shared linkage of DPDK 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:41.275 EAL: Registered [vdev] bus. 00:05:41.275 EAL: bus.vdev log level changed from disabled to notice 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:41.275 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:41.275 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:41.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:41.275 EAL: No shared files mode enabled, IPC will be disabled 00:05:41.275 EAL: No shared files mode enabled, IPC is disabled 00:05:41.275 EAL: Selected IOVA mode 'PA' 00:05:41.275 EAL: Probing VFIO support... 00:05:41.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.275 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:41.275 EAL: Ask a virtual area of 0x2e000 bytes 00:05:41.275 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:41.275 EAL: Setting up physically contiguous memory... 
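The EAL lines above skip VFIO because neither /sys/module/vfio nor /sys/module/vfio_pci exists in this VM, so the run stays on uio_pci_generic with IOVA mode 'PA'. A minimal sketch of checking for and loading VFIO before a rerun, using standard Linux module names rather than anything taken from this log:

  # Check the same sysfs paths EAL probes; load vfio-pci if they are missing.
  ls -d /sys/module/vfio /sys/module/vfio_pci 2>/dev/null || sudo modprobe vfio-pci
  # With vfio-pci loaded and an IOMMU enabled, scripts/setup.sh could bind the NVMe
  # controllers to vfio-pci instead of uio_pci_generic on the next pass.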
00:05:41.275 EAL: Setting maximum number of open files to 524288 00:05:41.275 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:41.275 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:41.275 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.275 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:41.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.275 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.275 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:41.275 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:41.275 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.275 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:41.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.275 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.275 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:41.275 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:41.275 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.276 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:41.276 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.276 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.276 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:41.276 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:41.276 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.276 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:41.276 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.276 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.276 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:41.276 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:41.276 EAL: Hugepages will be freed exactly as allocated. 00:05:41.276 EAL: No shared files mode enabled, IPC is disabled 00:05:41.276 EAL: No shared files mode enabled, IPC is disabled 00:05:41.534 EAL: TSC frequency is ~2200000 KHz 00:05:41.534 EAL: Main lcore 0 is ready (tid=7f916a213a00;cpuset=[0]) 00:05:41.534 EAL: Trying to obtain current memory policy. 00:05:41.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.534 EAL: Restoring previous memory policy: 0 00:05:41.534 EAL: request: mp_malloc_sync 00:05:41.534 EAL: No shared files mode enabled, IPC is disabled 00:05:41.534 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.534 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.534 EAL: No shared files mode enabled, IPC is disabled 00:05:41.534 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:41.534 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.534 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:41.534 00:05:41.534 00:05:41.534 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.534 http://cunit.sourceforge.net/ 00:05:41.534 00:05:41.534 00:05:41.534 Suite: components_suite 00:05:41.534 Test: vtophys_malloc_test ...passed 00:05:41.534 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:41.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.534 EAL: Restoring previous memory policy: 4 00:05:41.534 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.534 EAL: request: mp_malloc_sync 00:05:41.534 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 4MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 4MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 6MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 6MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 10MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 10MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 18MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 18MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 34MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 34MB 00:05:41.535 EAL: Trying to obtain current memory policy. 
00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 66MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 66MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.535 EAL: Restoring previous memory policy: 4 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was expanded by 130MB 00:05:41.535 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.535 EAL: request: mp_malloc_sync 00:05:41.535 EAL: No shared files mode enabled, IPC is disabled 00:05:41.535 EAL: Heap on socket 0 was shrunk by 130MB 00:05:41.535 EAL: Trying to obtain current memory policy. 00:05:41.535 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.794 EAL: Restoring previous memory policy: 4 00:05:41.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.794 EAL: request: mp_malloc_sync 00:05:41.794 EAL: No shared files mode enabled, IPC is disabled 00:05:41.794 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.794 EAL: request: mp_malloc_sync 00:05:41.794 EAL: No shared files mode enabled, IPC is disabled 00:05:41.794 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.794 EAL: Trying to obtain current memory policy. 00:05:41.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.794 EAL: Restoring previous memory policy: 4 00:05:41.794 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.794 EAL: request: mp_malloc_sync 00:05:41.794 EAL: No shared files mode enabled, IPC is disabled 00:05:41.794 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.053 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.053 EAL: request: mp_malloc_sync 00:05:42.053 EAL: No shared files mode enabled, IPC is disabled 00:05:42.053 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.053 EAL: Trying to obtain current memory policy. 
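The allocation rounds in this vtophys_spdk_malloc_test grow as 2^k + 2 MB: 4, 6, 10, 18, 34, 66, 130, 258, 514, and the 1026 MB round just below. A throwaway one-liner that reproduces the logged sequence (an observation about the sizes printed here, not a claim about the test's internals):

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
  # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB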
00:05:42.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.311 EAL: Restoring previous memory policy: 4 00:05:42.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.311 EAL: request: mp_malloc_sync 00:05:42.311 EAL: No shared files mode enabled, IPC is disabled 00:05:42.311 EAL: Heap on socket 0 was expanded by 1026MB 00:05:42.569 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.828 passed 00:05:42.828 00:05:42.828 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.828 suites 1 1 n/a 0 0 00:05:42.828 tests 2 2 2 0 0 00:05:42.828 asserts 5218 5218 5218 0 n/a 00:05:42.828 00:05:42.828 Elapsed time = 1.221 seconds 00:05:42.828 EAL: request: mp_malloc_sync 00:05:42.828 EAL: No shared files mode enabled, IPC is disabled 00:05:42.828 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.828 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.828 EAL: request: mp_malloc_sync 00:05:42.828 EAL: No shared files mode enabled, IPC is disabled 00:05:42.828 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.828 EAL: No shared files mode enabled, IPC is disabled 00:05:42.828 EAL: No shared files mode enabled, IPC is disabled 00:05:42.828 EAL: No shared files mode enabled, IPC is disabled 00:05:42.828 00:05:42.828 real 0m1.413s 00:05:42.828 user 0m0.779s 00:05:42.828 sys 0m0.506s 00:05:42.828 09:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.828 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 END TEST env_vtophys 00:05:42.828 ************************************ 00:05:42.828 09:52:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.828 09:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.828 09:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.828 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 START TEST env_pci 00:05:42.828 ************************************ 00:05:42.828 09:52:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.828 00:05:42.828 00:05:42.828 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.828 http://cunit.sourceforge.net/ 00:05:42.828 00:05:42.828 00:05:42.828 Suite: pci 00:05:42.828 Test: pci_hook ...[2024-12-16 09:52:41.287934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67494 has claimed it 00:05:42.828 passed 00:05:42.828 00:05:42.828 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.828 suites 1 1 n/a 0 0 00:05:42.828 tests 1 1 1 0 0 00:05:42.828 asserts 25 25 25 0 n/a 00:05:42.828 00:05:42.828 Elapsed time = 0.002 seconds 00:05:42.828 EAL: Cannot find device (10000:00:01.0) 00:05:42.828 EAL: Failed to attach device on primary process 00:05:42.828 00:05:42.828 real 0m0.016s 00:05:42.828 user 0m0.007s 00:05:42.828 sys 0m0.009s 00:05:42.828 09:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.828 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 END TEST env_pci 00:05:42.828 ************************************ 00:05:42.828 09:52:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:42.828 09:52:41 -- env/env.sh@15 -- # uname 00:05:42.828 09:52:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:42.828 09:52:41 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:42.828 09:52:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.828 09:52:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:42.828 09:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.828 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 START TEST env_dpdk_post_init 00:05:42.828 ************************************ 00:05:42.828 09:52:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.828 EAL: Detected CPU lcores: 10 00:05:42.828 EAL: Detected NUMA nodes: 1 00:05:42.828 EAL: Detected shared linkage of DPDK 00:05:42.828 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.828 EAL: Selected IOVA mode 'PA' 00:05:43.087 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.087 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:43.087 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:43.087 Starting DPDK initialization... 00:05:43.087 Starting SPDK post initialization... 00:05:43.087 SPDK NVMe probe 00:05:43.087 Attaching to 0000:00:06.0 00:05:43.087 Attaching to 0000:00:07.0 00:05:43.087 Attached to 0000:00:06.0 00:05:43.087 Attached to 0000:00:07.0 00:05:43.087 Cleaning up... 00:05:43.087 00:05:43.087 real 0m0.172s 00:05:43.087 user 0m0.041s 00:05:43.087 sys 0m0.032s 00:05:43.087 09:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.087 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.087 ************************************ 00:05:43.087 END TEST env_dpdk_post_init 00:05:43.087 ************************************ 00:05:43.087 09:52:41 -- env/env.sh@26 -- # uname 00:05:43.087 09:52:41 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.087 09:52:41 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.087 09:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.087 09:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.087 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.087 ************************************ 00:05:43.087 START TEST env_mem_callbacks 00:05:43.087 ************************************ 00:05:43.087 09:52:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.087 EAL: Detected CPU lcores: 10 00:05:43.087 EAL: Detected NUMA nodes: 1 00:05:43.087 EAL: Detected shared linkage of DPDK 00:05:43.087 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.087 EAL: Selected IOVA mode 'PA' 00:05:43.087 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.087 00:05:43.087 00:05:43.087 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.087 http://cunit.sourceforge.net/ 00:05:43.087 00:05:43.087 00:05:43.087 Suite: memory 00:05:43.087 Test: test ... 
00:05:43.087 register 0x200000200000 2097152 00:05:43.087 malloc 3145728 00:05:43.087 register 0x200000400000 4194304 00:05:43.087 buf 0x200000500000 len 3145728 PASSED 00:05:43.087 malloc 64 00:05:43.087 buf 0x2000004fff40 len 64 PASSED 00:05:43.087 malloc 4194304 00:05:43.087 register 0x200000800000 6291456 00:05:43.087 buf 0x200000a00000 len 4194304 PASSED 00:05:43.087 free 0x200000500000 3145728 00:05:43.087 free 0x2000004fff40 64 00:05:43.087 unregister 0x200000400000 4194304 PASSED 00:05:43.087 free 0x200000a00000 4194304 00:05:43.087 unregister 0x200000800000 6291456 PASSED 00:05:43.346 malloc 8388608 00:05:43.346 register 0x200000400000 10485760 00:05:43.346 buf 0x200000600000 len 8388608 PASSED 00:05:43.346 free 0x200000600000 8388608 00:05:43.346 unregister 0x200000400000 10485760 PASSED 00:05:43.346 passed 00:05:43.346 00:05:43.346 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.346 suites 1 1 n/a 0 0 00:05:43.346 tests 1 1 1 0 0 00:05:43.346 asserts 15 15 15 0 n/a 00:05:43.346 00:05:43.346 Elapsed time = 0.009 seconds 00:05:43.346 00:05:43.346 real 0m0.139s 00:05:43.346 user 0m0.017s 00:05:43.346 sys 0m0.020s 00:05:43.346 09:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.346 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.346 ************************************ 00:05:43.346 END TEST env_mem_callbacks 00:05:43.346 ************************************ 00:05:43.346 00:05:43.346 real 0m2.420s 00:05:43.346 user 0m1.237s 00:05:43.346 sys 0m0.827s 00:05:43.346 09:52:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.346 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.346 ************************************ 00:05:43.346 END TEST env 00:05:43.346 ************************************ 00:05:43.346 09:52:41 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.346 09:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.346 09:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.346 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.346 ************************************ 00:05:43.346 START TEST rpc 00:05:43.346 ************************************ 00:05:43.346 09:52:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.346 * Looking for test storage... 
00:05:43.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.346 09:52:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.346 09:52:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.346 09:52:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.605 09:52:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.605 09:52:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.605 09:52:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.605 09:52:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.605 09:52:41 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.605 09:52:41 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.605 09:52:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.605 09:52:41 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.605 09:52:41 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.605 09:52:41 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.605 09:52:41 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.605 09:52:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.605 09:52:41 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.605 09:52:41 -- scripts/common.sh@344 -- # : 1 00:05:43.605 09:52:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.605 09:52:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.605 09:52:41 -- scripts/common.sh@364 -- # decimal 1 00:05:43.605 09:52:41 -- scripts/common.sh@352 -- # local d=1 00:05:43.605 09:52:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.605 09:52:41 -- scripts/common.sh@354 -- # echo 1 00:05:43.605 09:52:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.605 09:52:41 -- scripts/common.sh@365 -- # decimal 2 00:05:43.605 09:52:41 -- scripts/common.sh@352 -- # local d=2 00:05:43.605 09:52:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.605 09:52:41 -- scripts/common.sh@354 -- # echo 2 00:05:43.605 09:52:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.605 09:52:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.605 09:52:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.605 09:52:41 -- scripts/common.sh@367 -- # return 0 00:05:43.605 09:52:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.605 09:52:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 09:52:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 09:52:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 09:52:41 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.605 --rc genhtml_branch_coverage=1 00:05:43.605 --rc genhtml_function_coverage=1 00:05:43.605 --rc genhtml_legend=1 00:05:43.605 --rc geninfo_all_blocks=1 00:05:43.605 --rc geninfo_unexecuted_blocks=1 00:05:43.605 00:05:43.605 ' 00:05:43.605 09:52:41 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:43.605 09:52:41 -- rpc/rpc.sh@65 -- # spdk_pid=67611 00:05:43.605 09:52:41 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.605 09:52:41 -- rpc/rpc.sh@67 -- # waitforlisten 67611 00:05:43.605 09:52:41 -- common/autotest_common.sh@829 -- # '[' -z 67611 ']' 00:05:43.605 09:52:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.605 09:52:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.605 09:52:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.605 09:52:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.605 09:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 [2024-12-16 09:52:42.056285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.605 [2024-12-16 09:52:42.056429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67611 ] 00:05:43.605 [2024-12-16 09:52:42.197556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.864 [2024-12-16 09:52:42.262029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.864 [2024-12-16 09:52:42.262221] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.864 [2024-12-16 09:52:42.262238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67611' to capture a snapshot of events at runtime. 00:05:43.864 [2024-12-16 09:52:42.262249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67611 for offline analysis/debug. 
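The rpc_integrity test that follows exercises the freshly started spdk_tgt over /var/tmp/spdk.sock. A hand-run sketch of the same calls via scripts/rpc.py (the rpc.py path is assumed from the repo layout; the RPC names and arguments are the ones traced below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock bdev_malloc_create 8 512                      # returns Malloc0 here
  $rpc -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  $rpc -s /var/tmp/spdk.sock bdev_get_bdevs | jq length                    # expect 2
  $rpc -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  $rpc -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0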
00:05:43.864 [2024-12-16 09:52:42.262287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.799 09:52:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.799 09:52:43 -- common/autotest_common.sh@862 -- # return 0 00:05:44.799 09:52:43 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.799 09:52:43 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.799 09:52:43 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.799 09:52:43 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.799 09:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.799 09:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 ************************************ 00:05:44.799 START TEST rpc_integrity 00:05:44.799 ************************************ 00:05:44.799 09:52:43 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:44.799 09:52:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.799 09:52:43 -- rpc/rpc.sh@13 -- # jq length 00:05:44.799 09:52:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.799 09:52:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.799 09:52:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.799 { 00:05:44.799 "aliases": [ 00:05:44.799 "54bd27f8-509f-40ea-bf9f-2d8fd74dbfae" 00:05:44.799 ], 00:05:44.799 "assigned_rate_limits": { 00:05:44.799 "r_mbytes_per_sec": 0, 00:05:44.799 "rw_ios_per_sec": 0, 00:05:44.799 "rw_mbytes_per_sec": 0, 00:05:44.799 "w_mbytes_per_sec": 0 00:05:44.799 }, 00:05:44.799 "block_size": 512, 00:05:44.799 "claimed": false, 00:05:44.799 "driver_specific": {}, 00:05:44.799 "memory_domains": [ 00:05:44.799 { 00:05:44.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.799 "dma_device_type": 2 00:05:44.799 } 00:05:44.799 ], 00:05:44.799 "name": "Malloc0", 00:05:44.799 "num_blocks": 16384, 00:05:44.799 "product_name": "Malloc disk", 00:05:44.799 "supported_io_types": { 00:05:44.799 "abort": true, 00:05:44.799 "compare": false, 00:05:44.799 "compare_and_write": false, 00:05:44.799 "flush": true, 00:05:44.799 "nvme_admin": false, 00:05:44.799 "nvme_io": false, 00:05:44.799 "read": true, 00:05:44.799 "reset": true, 00:05:44.799 "unmap": true, 00:05:44.799 "write": true, 00:05:44.799 "write_zeroes": true 00:05:44.799 }, 
00:05:44.799 "uuid": "54bd27f8-509f-40ea-bf9f-2d8fd74dbfae", 00:05:44.799 "zoned": false 00:05:44.799 } 00:05:44.799 ]' 00:05:44.799 09:52:43 -- rpc/rpc.sh@17 -- # jq length 00:05:44.799 09:52:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.799 09:52:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 [2024-12-16 09:52:43.233491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.799 [2024-12-16 09:52:43.233534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.799 [2024-12-16 09:52:43.233551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2138b60 00:05:44.799 [2024-12-16 09:52:43.233559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.799 [2024-12-16 09:52:43.234963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.799 [2024-12-16 09:52:43.234994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.799 Passthru0 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.799 { 00:05:44.799 "aliases": [ 00:05:44.799 "54bd27f8-509f-40ea-bf9f-2d8fd74dbfae" 00:05:44.799 ], 00:05:44.799 "assigned_rate_limits": { 00:05:44.799 "r_mbytes_per_sec": 0, 00:05:44.799 "rw_ios_per_sec": 0, 00:05:44.799 "rw_mbytes_per_sec": 0, 00:05:44.799 "w_mbytes_per_sec": 0 00:05:44.799 }, 00:05:44.799 "block_size": 512, 00:05:44.799 "claim_type": "exclusive_write", 00:05:44.799 "claimed": true, 00:05:44.799 "driver_specific": {}, 00:05:44.799 "memory_domains": [ 00:05:44.799 { 00:05:44.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.799 "dma_device_type": 2 00:05:44.799 } 00:05:44.799 ], 00:05:44.799 "name": "Malloc0", 00:05:44.799 "num_blocks": 16384, 00:05:44.799 "product_name": "Malloc disk", 00:05:44.799 "supported_io_types": { 00:05:44.799 "abort": true, 00:05:44.799 "compare": false, 00:05:44.799 "compare_and_write": false, 00:05:44.799 "flush": true, 00:05:44.799 "nvme_admin": false, 00:05:44.799 "nvme_io": false, 00:05:44.799 "read": true, 00:05:44.799 "reset": true, 00:05:44.799 "unmap": true, 00:05:44.799 "write": true, 00:05:44.799 "write_zeroes": true 00:05:44.799 }, 00:05:44.799 "uuid": "54bd27f8-509f-40ea-bf9f-2d8fd74dbfae", 00:05:44.799 "zoned": false 00:05:44.799 }, 00:05:44.799 { 00:05:44.799 "aliases": [ 00:05:44.799 "32b5de1a-8870-533b-97b9-3bad020eb401" 00:05:44.799 ], 00:05:44.799 "assigned_rate_limits": { 00:05:44.799 "r_mbytes_per_sec": 0, 00:05:44.799 "rw_ios_per_sec": 0, 00:05:44.799 "rw_mbytes_per_sec": 0, 00:05:44.799 "w_mbytes_per_sec": 0 00:05:44.799 }, 00:05:44.799 "block_size": 512, 00:05:44.799 "claimed": false, 00:05:44.799 "driver_specific": { 00:05:44.799 "passthru": { 00:05:44.799 "base_bdev_name": "Malloc0", 00:05:44.799 "name": "Passthru0" 00:05:44.799 } 00:05:44.799 }, 00:05:44.799 "memory_domains": [ 00:05:44.799 { 00:05:44.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.799 "dma_device_type": 2 00:05:44.799 } 00:05:44.799 ], 
00:05:44.799 "name": "Passthru0", 00:05:44.799 "num_blocks": 16384, 00:05:44.799 "product_name": "passthru", 00:05:44.799 "supported_io_types": { 00:05:44.799 "abort": true, 00:05:44.799 "compare": false, 00:05:44.799 "compare_and_write": false, 00:05:44.799 "flush": true, 00:05:44.799 "nvme_admin": false, 00:05:44.799 "nvme_io": false, 00:05:44.799 "read": true, 00:05:44.799 "reset": true, 00:05:44.799 "unmap": true, 00:05:44.799 "write": true, 00:05:44.799 "write_zeroes": true 00:05:44.799 }, 00:05:44.799 "uuid": "32b5de1a-8870-533b-97b9-3bad020eb401", 00:05:44.799 "zoned": false 00:05:44.799 } 00:05:44.799 ]' 00:05:44.799 09:52:43 -- rpc/rpc.sh@21 -- # jq length 00:05:44.799 09:52:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.799 09:52:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.799 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.799 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.799 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.799 09:52:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.800 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.800 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.800 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.800 09:52:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.800 09:52:43 -- rpc/rpc.sh@26 -- # jq length 00:05:44.800 09:52:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.800 00:05:44.800 real 0m0.310s 00:05:44.800 user 0m0.198s 00:05:44.800 sys 0m0.036s 00:05:44.800 09:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.800 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.800 ************************************ 00:05:44.800 END TEST rpc_integrity 00:05:44.800 ************************************ 00:05:45.058 09:52:43 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.058 09:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.058 09:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 ************************************ 00:05:45.058 START TEST rpc_plugins 00:05:45.058 ************************************ 00:05:45.058 09:52:43 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:45.058 09:52:43 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.058 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.058 09:52:43 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.058 09:52:43 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.058 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.058 09:52:43 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.058 { 00:05:45.058 "aliases": [ 00:05:45.058 "76ff33b4-e7db-49e8-b357-772bfa66ef66" 00:05:45.058 ], 00:05:45.058 "assigned_rate_limits": { 00:05:45.058 "r_mbytes_per_sec": 0, 00:05:45.058 
"rw_ios_per_sec": 0, 00:05:45.058 "rw_mbytes_per_sec": 0, 00:05:45.058 "w_mbytes_per_sec": 0 00:05:45.058 }, 00:05:45.058 "block_size": 4096, 00:05:45.058 "claimed": false, 00:05:45.058 "driver_specific": {}, 00:05:45.058 "memory_domains": [ 00:05:45.058 { 00:05:45.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.058 "dma_device_type": 2 00:05:45.058 } 00:05:45.058 ], 00:05:45.058 "name": "Malloc1", 00:05:45.058 "num_blocks": 256, 00:05:45.058 "product_name": "Malloc disk", 00:05:45.058 "supported_io_types": { 00:05:45.058 "abort": true, 00:05:45.058 "compare": false, 00:05:45.058 "compare_and_write": false, 00:05:45.058 "flush": true, 00:05:45.058 "nvme_admin": false, 00:05:45.058 "nvme_io": false, 00:05:45.058 "read": true, 00:05:45.058 "reset": true, 00:05:45.058 "unmap": true, 00:05:45.058 "write": true, 00:05:45.058 "write_zeroes": true 00:05:45.058 }, 00:05:45.058 "uuid": "76ff33b4-e7db-49e8-b357-772bfa66ef66", 00:05:45.058 "zoned": false 00:05:45.058 } 00:05:45.058 ]' 00:05:45.058 09:52:43 -- rpc/rpc.sh@32 -- # jq length 00:05:45.058 09:52:43 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.058 09:52:43 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.058 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.058 09:52:43 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.058 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.058 09:52:43 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.058 09:52:43 -- rpc/rpc.sh@36 -- # jq length 00:05:45.058 09:52:43 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.058 00:05:45.058 real 0m0.156s 00:05:45.058 user 0m0.106s 00:05:45.058 sys 0m0.016s 00:05:45.058 09:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.058 ************************************ 00:05:45.058 END TEST rpc_plugins 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 ************************************ 00:05:45.058 09:52:43 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.058 09:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.058 09:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 ************************************ 00:05:45.058 START TEST rpc_trace_cmd_test 00:05:45.058 ************************************ 00:05:45.058 09:52:43 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:45.058 09:52:43 -- rpc/rpc.sh@40 -- # local info 00:05:45.058 09:52:43 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.058 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.058 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.058 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.058 09:52:43 -- rpc/rpc.sh@42 -- # info='{ 00:05:45.058 "bdev": { 00:05:45.058 "mask": "0x8", 00:05:45.058 "tpoint_mask": "0xffffffffffffffff" 00:05:45.058 }, 00:05:45.059 "bdev_nvme": { 00:05:45.059 "mask": "0x4000", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "blobfs": { 00:05:45.059 "mask": "0x80", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "dsa": { 00:05:45.059 "mask": "0x200", 00:05:45.059 
"tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "ftl": { 00:05:45.059 "mask": "0x40", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "iaa": { 00:05:45.059 "mask": "0x1000", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "iscsi_conn": { 00:05:45.059 "mask": "0x2", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "nvme_pcie": { 00:05:45.059 "mask": "0x800", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "nvme_tcp": { 00:05:45.059 "mask": "0x2000", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "nvmf_rdma": { 00:05:45.059 "mask": "0x10", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "nvmf_tcp": { 00:05:45.059 "mask": "0x20", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "scsi": { 00:05:45.059 "mask": "0x4", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "thread": { 00:05:45.059 "mask": "0x400", 00:05:45.059 "tpoint_mask": "0x0" 00:05:45.059 }, 00:05:45.059 "tpoint_group_mask": "0x8", 00:05:45.059 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67611" 00:05:45.059 }' 00:05:45.059 09:52:43 -- rpc/rpc.sh@43 -- # jq length 00:05:45.317 09:52:43 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:45.317 09:52:43 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.317 09:52:43 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.317 09:52:43 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.317 09:52:43 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.317 09:52:43 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.317 09:52:43 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.317 09:52:43 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.317 09:52:43 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.317 00:05:45.317 real 0m0.229s 00:05:45.317 user 0m0.191s 00:05:45.317 sys 0m0.029s 00:05:45.317 09:52:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.317 ************************************ 00:05:45.317 END TEST rpc_trace_cmd_test 00:05:45.317 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.317 ************************************ 00:05:45.318 09:52:43 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:45.318 09:52:43 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:45.318 09:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.318 09:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.318 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.318 ************************************ 00:05:45.318 START TEST go_rpc 00:05:45.318 ************************************ 00:05:45.318 09:52:43 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:45.318 09:52:43 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:45.318 09:52:43 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:45.318 09:52:43 -- rpc/rpc.sh@52 -- # jq length 00:05:45.576 09:52:43 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:45.576 09:52:43 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.576 09:52:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.576 09:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 09:52:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.576 09:52:43 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:45.576 09:52:44 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:45.576 09:52:44 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["3decfa2e-96ce-41bd-8715-555d5458113c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"3decfa2e-96ce-41bd-8715-555d5458113c","zoned":false}]' 00:05:45.576 09:52:44 -- rpc/rpc.sh@57 -- # jq length 00:05:45.576 09:52:44 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:45.576 09:52:44 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.576 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.576 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.576 09:52:44 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:45.576 09:52:44 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:45.576 09:52:44 -- rpc/rpc.sh@61 -- # jq length 00:05:45.576 09:52:44 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:45.576 00:05:45.576 real 0m0.223s 00:05:45.576 user 0m0.150s 00:05:45.576 sys 0m0.039s 00:05:45.576 09:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.576 ************************************ 00:05:45.576 END TEST go_rpc 00:05:45.576 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 ************************************ 00:05:45.576 09:52:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.576 09:52:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.576 09:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.576 09:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.576 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 ************************************ 00:05:45.576 START TEST rpc_daemon_integrity 00:05:45.576 ************************************ 00:05:45.576 09:52:44 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:45.576 09:52:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.576 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.576 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 09:52:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.835 09:52:44 -- rpc/rpc.sh@13 -- # jq length 00:05:45.835 09:52:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.835 09:52:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.835 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.835 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 09:52:44 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:45.835 09:52:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.835 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.835 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 09:52:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.835 { 00:05:45.835 "aliases": [ 00:05:45.835 "76445b93-273a-42c1-b16b-da10c1e1b25d" 00:05:45.835 ], 00:05:45.835 "assigned_rate_limits": { 00:05:45.835 
"r_mbytes_per_sec": 0, 00:05:45.835 "rw_ios_per_sec": 0, 00:05:45.835 "rw_mbytes_per_sec": 0, 00:05:45.835 "w_mbytes_per_sec": 0 00:05:45.835 }, 00:05:45.835 "block_size": 512, 00:05:45.835 "claimed": false, 00:05:45.835 "driver_specific": {}, 00:05:45.835 "memory_domains": [ 00:05:45.835 { 00:05:45.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.835 "dma_device_type": 2 00:05:45.835 } 00:05:45.835 ], 00:05:45.835 "name": "Malloc3", 00:05:45.835 "num_blocks": 16384, 00:05:45.835 "product_name": "Malloc disk", 00:05:45.835 "supported_io_types": { 00:05:45.835 "abort": true, 00:05:45.835 "compare": false, 00:05:45.835 "compare_and_write": false, 00:05:45.835 "flush": true, 00:05:45.835 "nvme_admin": false, 00:05:45.835 "nvme_io": false, 00:05:45.835 "read": true, 00:05:45.835 "reset": true, 00:05:45.835 "unmap": true, 00:05:45.835 "write": true, 00:05:45.835 "write_zeroes": true 00:05:45.835 }, 00:05:45.835 "uuid": "76445b93-273a-42c1-b16b-da10c1e1b25d", 00:05:45.835 "zoned": false 00:05:45.835 } 00:05:45.835 ]' 00:05:45.835 09:52:44 -- rpc/rpc.sh@17 -- # jq length 00:05:45.835 09:52:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.835 09:52:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:45.835 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.835 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 [2024-12-16 09:52:44.354322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:45.835 [2024-12-16 09:52:44.354408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.835 [2024-12-16 09:52:44.354427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213a990 00:05:45.835 [2024-12-16 09:52:44.354436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.835 [2024-12-16 09:52:44.355707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.835 [2024-12-16 09:52:44.355782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.835 Passthru0 00:05:45.835 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 09:52:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.835 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.835 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.835 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.835 09:52:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.835 { 00:05:45.835 "aliases": [ 00:05:45.835 "76445b93-273a-42c1-b16b-da10c1e1b25d" 00:05:45.835 ], 00:05:45.835 "assigned_rate_limits": { 00:05:45.835 "r_mbytes_per_sec": 0, 00:05:45.835 "rw_ios_per_sec": 0, 00:05:45.836 "rw_mbytes_per_sec": 0, 00:05:45.836 "w_mbytes_per_sec": 0 00:05:45.836 }, 00:05:45.836 "block_size": 512, 00:05:45.836 "claim_type": "exclusive_write", 00:05:45.836 "claimed": true, 00:05:45.836 "driver_specific": {}, 00:05:45.836 "memory_domains": [ 00:05:45.836 { 00:05:45.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.836 "dma_device_type": 2 00:05:45.836 } 00:05:45.836 ], 00:05:45.836 "name": "Malloc3", 00:05:45.836 "num_blocks": 16384, 00:05:45.836 "product_name": "Malloc disk", 00:05:45.836 "supported_io_types": { 00:05:45.836 "abort": true, 00:05:45.836 "compare": false, 00:05:45.836 "compare_and_write": false, 00:05:45.836 "flush": true, 00:05:45.836 "nvme_admin": false, 00:05:45.836 "nvme_io": false, 00:05:45.836 "read": true, 00:05:45.836 "reset": true, 
00:05:45.836 "unmap": true, 00:05:45.836 "write": true, 00:05:45.836 "write_zeroes": true 00:05:45.836 }, 00:05:45.836 "uuid": "76445b93-273a-42c1-b16b-da10c1e1b25d", 00:05:45.836 "zoned": false 00:05:45.836 }, 00:05:45.836 { 00:05:45.836 "aliases": [ 00:05:45.836 "e00cb335-3f59-5133-9aef-c67427ff1450" 00:05:45.836 ], 00:05:45.836 "assigned_rate_limits": { 00:05:45.836 "r_mbytes_per_sec": 0, 00:05:45.836 "rw_ios_per_sec": 0, 00:05:45.836 "rw_mbytes_per_sec": 0, 00:05:45.836 "w_mbytes_per_sec": 0 00:05:45.836 }, 00:05:45.836 "block_size": 512, 00:05:45.836 "claimed": false, 00:05:45.836 "driver_specific": { 00:05:45.836 "passthru": { 00:05:45.836 "base_bdev_name": "Malloc3", 00:05:45.836 "name": "Passthru0" 00:05:45.836 } 00:05:45.836 }, 00:05:45.836 "memory_domains": [ 00:05:45.836 { 00:05:45.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.836 "dma_device_type": 2 00:05:45.836 } 00:05:45.836 ], 00:05:45.836 "name": "Passthru0", 00:05:45.836 "num_blocks": 16384, 00:05:45.836 "product_name": "passthru", 00:05:45.836 "supported_io_types": { 00:05:45.836 "abort": true, 00:05:45.836 "compare": false, 00:05:45.836 "compare_and_write": false, 00:05:45.836 "flush": true, 00:05:45.836 "nvme_admin": false, 00:05:45.836 "nvme_io": false, 00:05:45.836 "read": true, 00:05:45.836 "reset": true, 00:05:45.836 "unmap": true, 00:05:45.836 "write": true, 00:05:45.836 "write_zeroes": true 00:05:45.836 }, 00:05:45.836 "uuid": "e00cb335-3f59-5133-9aef-c67427ff1450", 00:05:45.836 "zoned": false 00:05:45.836 } 00:05:45.836 ]' 00:05:45.836 09:52:44 -- rpc/rpc.sh@21 -- # jq length 00:05:45.836 09:52:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.836 09:52:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.836 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.836 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.836 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.836 09:52:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:45.836 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.836 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.836 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.836 09:52:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.836 09:52:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.836 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.094 09:52:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.094 09:52:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.094 09:52:44 -- rpc/rpc.sh@26 -- # jq length 00:05:46.094 09:52:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.094 00:05:46.094 real 0m0.319s 00:05:46.094 user 0m0.213s 00:05:46.094 sys 0m0.036s 00:05:46.094 09:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.094 ************************************ 00:05:46.094 END TEST rpc_daemon_integrity 00:05:46.094 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.094 ************************************ 00:05:46.095 09:52:44 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.095 09:52:44 -- rpc/rpc.sh@84 -- # killprocess 67611 00:05:46.095 09:52:44 -- common/autotest_common.sh@936 -- # '[' -z 67611 ']' 00:05:46.095 09:52:44 -- common/autotest_common.sh@940 -- # kill -0 67611 00:05:46.095 09:52:44 -- common/autotest_common.sh@941 -- # uname 00:05:46.095 09:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.095 09:52:44 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67611 00:05:46.095 09:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.095 09:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.095 killing process with pid 67611 00:05:46.095 09:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67611' 00:05:46.095 09:52:44 -- common/autotest_common.sh@955 -- # kill 67611 00:05:46.095 09:52:44 -- common/autotest_common.sh@960 -- # wait 67611 00:05:46.355 00:05:46.355 real 0m3.140s 00:05:46.355 user 0m4.076s 00:05:46.355 sys 0m0.788s 00:05:46.355 09:52:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.355 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.355 ************************************ 00:05:46.355 END TEST rpc 00:05:46.355 ************************************ 00:05:46.625 09:52:44 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.625 09:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.625 09:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.625 09:52:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.625 ************************************ 00:05:46.625 START TEST rpc_client 00:05:46.625 ************************************ 00:05:46.625 09:52:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:46.625 * Looking for test storage... 00:05:46.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:46.625 09:52:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.625 09:52:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.625 09:52:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.625 09:52:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.625 09:52:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.625 09:52:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.625 09:52:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.625 09:52:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.625 09:52:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.625 09:52:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.625 09:52:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.625 09:52:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.625 09:52:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.625 09:52:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.625 09:52:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.625 09:52:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.625 09:52:45 -- scripts/common.sh@344 -- # : 1 00:05:46.625 09:52:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.625 09:52:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.625 09:52:45 -- scripts/common.sh@364 -- # decimal 1 00:05:46.625 09:52:45 -- scripts/common.sh@352 -- # local d=1 00:05:46.625 09:52:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.625 09:52:45 -- scripts/common.sh@354 -- # echo 1 00:05:46.625 09:52:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.625 09:52:45 -- scripts/common.sh@365 -- # decimal 2 00:05:46.625 09:52:45 -- scripts/common.sh@352 -- # local d=2 00:05:46.625 09:52:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.625 09:52:45 -- scripts/common.sh@354 -- # echo 2 00:05:46.625 09:52:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.625 09:52:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.625 09:52:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.625 09:52:45 -- scripts/common.sh@367 -- # return 0 00:05:46.625 09:52:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.625 09:52:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.625 --rc genhtml_branch_coverage=1 00:05:46.625 --rc genhtml_function_coverage=1 00:05:46.625 --rc genhtml_legend=1 00:05:46.625 --rc geninfo_all_blocks=1 00:05:46.625 --rc geninfo_unexecuted_blocks=1 00:05:46.625 00:05:46.625 ' 00:05:46.625 09:52:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.625 --rc genhtml_branch_coverage=1 00:05:46.625 --rc genhtml_function_coverage=1 00:05:46.625 --rc genhtml_legend=1 00:05:46.625 --rc geninfo_all_blocks=1 00:05:46.625 --rc geninfo_unexecuted_blocks=1 00:05:46.625 00:05:46.625 ' 00:05:46.625 09:52:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.625 --rc genhtml_branch_coverage=1 00:05:46.625 --rc genhtml_function_coverage=1 00:05:46.625 --rc genhtml_legend=1 00:05:46.625 --rc geninfo_all_blocks=1 00:05:46.625 --rc geninfo_unexecuted_blocks=1 00:05:46.625 00:05:46.625 ' 00:05:46.625 09:52:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.625 --rc genhtml_branch_coverage=1 00:05:46.625 --rc genhtml_function_coverage=1 00:05:46.625 --rc genhtml_legend=1 00:05:46.625 --rc geninfo_all_blocks=1 00:05:46.625 --rc geninfo_unexecuted_blocks=1 00:05:46.625 00:05:46.625 ' 00:05:46.626 09:52:45 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:46.626 OK 00:05:46.626 09:52:45 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.626 00:05:46.626 real 0m0.206s 00:05:46.626 user 0m0.128s 00:05:46.626 sys 0m0.091s 00:05:46.626 09:52:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.626 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.626 ************************************ 00:05:46.626 END TEST rpc_client 00:05:46.626 ************************************ 00:05:46.906 09:52:45 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.906 09:52:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.906 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.906 ************************************ 00:05:46.906 START TEST 
json_config 00:05:46.906 ************************************ 00:05:46.906 09:52:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:46.906 09:52:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.906 09:52:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.906 09:52:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.906 09:52:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.906 09:52:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.906 09:52:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.906 09:52:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.906 09:52:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.906 09:52:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.906 09:52:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.906 09:52:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.906 09:52:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.906 09:52:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.906 09:52:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.906 09:52:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.906 09:52:45 -- scripts/common.sh@344 -- # : 1 00:05:46.906 09:52:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.906 09:52:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.906 09:52:45 -- scripts/common.sh@364 -- # decimal 1 00:05:46.906 09:52:45 -- scripts/common.sh@352 -- # local d=1 00:05:46.906 09:52:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.906 09:52:45 -- scripts/common.sh@354 -- # echo 1 00:05:46.906 09:52:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.906 09:52:45 -- scripts/common.sh@365 -- # decimal 2 00:05:46.906 09:52:45 -- scripts/common.sh@352 -- # local d=2 00:05:46.906 09:52:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.906 09:52:45 -- scripts/common.sh@354 -- # echo 2 00:05:46.906 09:52:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.906 09:52:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.906 09:52:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.906 09:52:45 -- scripts/common.sh@367 -- # return 0 00:05:46.906 09:52:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.906 --rc genhtml_branch_coverage=1 00:05:46.906 --rc genhtml_function_coverage=1 00:05:46.906 --rc genhtml_legend=1 00:05:46.906 --rc geninfo_all_blocks=1 00:05:46.906 --rc geninfo_unexecuted_blocks=1 00:05:46.906 00:05:46.906 ' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.906 --rc genhtml_branch_coverage=1 00:05:46.906 --rc genhtml_function_coverage=1 00:05:46.906 --rc genhtml_legend=1 00:05:46.906 --rc geninfo_all_blocks=1 00:05:46.906 --rc geninfo_unexecuted_blocks=1 00:05:46.906 00:05:46.906 ' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.906 --rc genhtml_branch_coverage=1 00:05:46.906 --rc genhtml_function_coverage=1 00:05:46.906 --rc genhtml_legend=1 00:05:46.906 --rc 
geninfo_all_blocks=1 00:05:46.906 --rc geninfo_unexecuted_blocks=1 00:05:46.906 00:05:46.906 ' 00:05:46.906 09:52:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.907 --rc genhtml_branch_coverage=1 00:05:46.907 --rc genhtml_function_coverage=1 00:05:46.907 --rc genhtml_legend=1 00:05:46.907 --rc geninfo_all_blocks=1 00:05:46.907 --rc geninfo_unexecuted_blocks=1 00:05:46.907 00:05:46.907 ' 00:05:46.907 09:52:45 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.907 09:52:45 -- nvmf/common.sh@7 -- # uname -s 00:05:46.907 09:52:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.907 09:52:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.907 09:52:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.907 09:52:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.907 09:52:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.907 09:52:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.907 09:52:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.907 09:52:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.907 09:52:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.907 09:52:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.907 09:52:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:05:46.907 09:52:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:05:46.907 09:52:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.907 09:52:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.907 09:52:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.907 09:52:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.907 09:52:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.907 09:52:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.907 09:52:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.907 09:52:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.907 09:52:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.907 09:52:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.907 
09:52:45 -- paths/export.sh@5 -- # export PATH 00:05:46.907 09:52:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.907 09:52:45 -- nvmf/common.sh@46 -- # : 0 00:05:46.907 09:52:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.907 09:52:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.907 09:52:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.907 09:52:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.907 09:52:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.907 09:52:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.907 09:52:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.907 09:52:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.907 09:52:45 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.907 09:52:45 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.907 09:52:45 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:46.907 09:52:45 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.907 09:52:45 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:46.907 09:52:45 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.907 09:52:45 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:46.907 09:52:45 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:46.907 09:52:45 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:46.907 09:52:45 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:46.907 INFO: JSON configuration test init 00:05:46.907 09:52:45 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.907 09:52:45 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:46.907 09:52:45 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:46.907 09:52:45 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:46.907 09:52:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.907 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.907 09:52:45 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:46.907 09:52:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.907 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.907 Waiting for target to run... 00:05:46.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
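Annotation: the json_config test next brings up its own SPDK target in a paused state and blocks until the RPC socket answers; --wait-for-rpc holds the target in a pre-init state so the test can feed configuration over RPC before normal startup continues. A minimal bash sketch of that step, using only the binary path, flags and socket path visible in the trace below (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its verbatim code):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    # wait for the target to create its RPC unix socket; give up if it dies first
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk_tgt.sock ] && break
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'target died before listening' >&2; exit 1; }
        sleep 0.1
    done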
00:05:46.907 09:52:45 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.907 09:52:45 -- json_config/json_config.sh@98 -- # local app=target 00:05:46.907 09:52:45 -- json_config/json_config.sh@99 -- # shift 00:05:46.907 09:52:45 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:46.907 09:52:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.907 09:52:45 -- json_config/json_config.sh@111 -- # app_pid[$app]=67936 00:05:46.907 09:52:45 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:46.907 09:52:45 -- json_config/json_config.sh@114 -- # waitforlisten 67936 /var/tmp/spdk_tgt.sock 00:05:46.907 09:52:45 -- common/autotest_common.sh@829 -- # '[' -z 67936 ']' 00:05:46.907 09:52:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.907 09:52:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.907 09:52:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.907 09:52:45 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.907 09:52:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.907 09:52:45 -- common/autotest_common.sh@10 -- # set +x 00:05:46.907 [2024-12-16 09:52:45.504850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.907 [2024-12-16 09:52:45.504962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67936 ] 00:05:47.473 [2024-12-16 09:52:45.929761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.474 [2024-12-16 09:52:45.973796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.474 [2024-12-16 09:52:45.973966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.040 09:52:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.040 09:52:46 -- common/autotest_common.sh@862 -- # return 0 00:05:48.040 00:05:48.040 09:52:46 -- json_config/json_config.sh@115 -- # echo '' 00:05:48.040 09:52:46 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:48.040 09:52:46 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:48.040 09:52:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.040 09:52:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.040 09:52:46 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:48.040 09:52:46 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:48.040 09:52:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.040 09:52:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.040 09:52:46 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:48.040 09:52:46 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:48.040 09:52:46 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:48.609 09:52:47 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:48.609 09:52:47 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:48.609 09:52:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.609 09:52:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.609 09:52:47 -- json_config/json_config.sh@48 -- # local ret=0 00:05:48.609 09:52:47 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:48.609 09:52:47 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:48.609 09:52:47 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:48.609 09:52:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:48.609 09:52:47 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:48.867 09:52:47 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:48.867 09:52:47 -- json_config/json_config.sh@51 -- # local get_types 00:05:48.867 09:52:47 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:48.867 09:52:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.867 09:52:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.867 09:52:47 -- json_config/json_config.sh@58 -- # return 0 00:05:48.867 09:52:47 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:48.867 09:52:47 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:48.867 09:52:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.867 09:52:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.867 09:52:47 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:48.867 09:52:47 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:48.867 09:52:47 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:48.867 09:52:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:49.126 MallocForNvmf0 00:05:49.126 09:52:47 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.126 09:52:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:49.384 MallocForNvmf1 00:05:49.384 09:52:47 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.384 09:52:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:49.643 [2024-12-16 09:52:48.039432] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
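Annotation: before building the NVMe-oF configuration, the trace above loads any locally generated NVMe config and confirms the target only reports the two expected notification types (bdev_register, bdev_unregister). A rough equivalent of those two checks; piping gen_nvme.sh straight into load_config is an assumption based on the adjacent trace lines, not something the log states explicitly:

    # feed generated NVMe bdev/subsystem config into the paused target
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
        | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
    # list the notification types the target advertises (expected: bdev_register, bdev_unregister)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'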
00:05:49.643 09:52:48 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.643 09:52:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:49.643 09:52:48 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.643 09:52:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:49.902 09:52:48 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:49.902 09:52:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:50.160 09:52:48 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.160 09:52:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:50.419 [2024-12-16 09:52:48.915962] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.419 09:52:48 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:50.419 09:52:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.419 09:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:50.419 09:52:48 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:50.419 09:52:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.419 09:52:48 -- common/autotest_common.sh@10 -- # set +x 00:05:50.419 09:52:49 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:50.419 09:52:49 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.419 09:52:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:50.678 MallocBdevForConfigChangeCheck 00:05:50.678 09:52:49 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:50.678 09:52:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.678 09:52:49 -- common/autotest_common.sh@10 -- # set +x 00:05:50.678 09:52:49 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:50.678 09:52:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.244 INFO: shutting down applications... 00:05:51.244 09:52:49 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
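Annotation: everything the target was just configured with came in through plain rpc.py calls. Condensed from the trace above, the NVMe-oF bring-up and the config snapshot look roughly like this; all commands and arguments are taken verbatim from the log, while the redirect of save_config into spdk_tgt_config.json is inferred from the relaunch command later in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # extra bdev used later to prove configuration changes are detected
    $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    # snapshot the running configuration for the later same/changed comparisons
    $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

The relaunch further down replays exactly this state by passing --json spdk_tgt_config.json instead of issuing the RPCs again.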
00:05:51.244 09:52:49 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:51.244 09:52:49 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:51.244 09:52:49 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:51.244 09:52:49 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:51.502 Calling clear_iscsi_subsystem 00:05:51.502 Calling clear_nvmf_subsystem 00:05:51.502 Calling clear_nbd_subsystem 00:05:51.502 Calling clear_ublk_subsystem 00:05:51.502 Calling clear_vhost_blk_subsystem 00:05:51.502 Calling clear_vhost_scsi_subsystem 00:05:51.502 Calling clear_scheduler_subsystem 00:05:51.502 Calling clear_bdev_subsystem 00:05:51.502 Calling clear_accel_subsystem 00:05:51.502 Calling clear_vmd_subsystem 00:05:51.502 Calling clear_sock_subsystem 00:05:51.502 Calling clear_iobuf_subsystem 00:05:51.502 09:52:49 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:51.502 09:52:49 -- json_config/json_config.sh@396 -- # count=100 00:05:51.502 09:52:49 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:51.502 09:52:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.502 09:52:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:51.502 09:52:49 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:52.069 09:52:50 -- json_config/json_config.sh@398 -- # break 00:05:52.069 09:52:50 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:52.069 09:52:50 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:52.069 09:52:50 -- json_config/json_config.sh@120 -- # local app=target 00:05:52.069 09:52:50 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:52.069 09:52:50 -- json_config/json_config.sh@124 -- # [[ -n 67936 ]] 00:05:52.069 09:52:50 -- json_config/json_config.sh@127 -- # kill -SIGINT 67936 00:05:52.069 09:52:50 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:52.069 09:52:50 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:52.069 09:52:50 -- json_config/json_config.sh@130 -- # kill -0 67936 00:05:52.069 09:52:50 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:52.327 09:52:50 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:52.327 09:52:50 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:52.327 09:52:50 -- json_config/json_config.sh@130 -- # kill -0 67936 00:05:52.327 09:52:50 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:52.327 SPDK target shutdown done 00:05:52.327 INFO: relaunching applications... 00:05:52.327 09:52:50 -- json_config/json_config.sh@132 -- # break 00:05:52.327 09:52:50 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:52.327 09:52:50 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:52.327 09:52:50 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
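Annotation: teardown is scripted the same way: the subsystems are cleared through clear_config.py, then the target receives SIGINT and the harness polls until the pid disappears. A trimmed sketch, reusing $tgt_pid from the earlier launch sketch; the 30-iteration limit and 0.5 s sleep mirror the loop in the trace above:

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$tgt_pid"
    for _ in $(seq 1 30); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # pid gone: shutdown finished
        sleep 0.5
    done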
00:05:52.327 09:52:50 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.328 09:52:50 -- json_config/json_config.sh@98 -- # local app=target 00:05:52.328 09:52:50 -- json_config/json_config.sh@99 -- # shift 00:05:52.328 09:52:50 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:52.328 09:52:50 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:52.328 09:52:50 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:52.328 09:52:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:52.328 09:52:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:52.328 09:52:50 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:52.328 09:52:50 -- json_config/json_config.sh@111 -- # app_pid[$app]=68205 00:05:52.328 09:52:50 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:52.328 Waiting for target to run... 00:05:52.328 09:52:50 -- json_config/json_config.sh@114 -- # waitforlisten 68205 /var/tmp/spdk_tgt.sock 00:05:52.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.328 09:52:50 -- common/autotest_common.sh@829 -- # '[' -z 68205 ']' 00:05:52.328 09:52:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.328 09:52:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.328 09:52:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.328 09:52:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.328 09:52:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.328 [2024-12-16 09:52:50.950028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.328 [2024-12-16 09:52:50.950120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68205 ] 00:05:52.894 [2024-12-16 09:52:51.372617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.894 [2024-12-16 09:52:51.419044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.894 [2024-12-16 09:52:51.419176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.153 [2024-12-16 09:52:51.715560] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.153 [2024-12-16 09:52:51.747654] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:53.411 00:05:53.411 INFO: Checking if target configuration is the same... 00:05:53.411 09:52:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.411 09:52:51 -- common/autotest_common.sh@862 -- # return 0 00:05:53.411 09:52:51 -- json_config/json_config.sh@115 -- # echo '' 00:05:53.411 09:52:51 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:53.411 09:52:51 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:53.411 09:52:51 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.411 09:52:51 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:53.411 09:52:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.411 + '[' 2 -ne 2 ']' 00:05:53.411 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:53.411 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:53.411 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:53.411 +++ basename /dev/fd/62 00:05:53.411 ++ mktemp /tmp/62.XXX 00:05:53.411 + tmp_file_1=/tmp/62.Snt 00:05:53.411 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:53.411 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:53.411 + tmp_file_2=/tmp/spdk_tgt_config.json.Q8u 00:05:53.411 + ret=0 00:05:53.411 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.978 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:53.978 + diff -u /tmp/62.Snt /tmp/spdk_tgt_config.json.Q8u 00:05:53.978 INFO: JSON config files are the same 00:05:53.978 + echo 'INFO: JSON config files are the same' 00:05:53.978 + rm /tmp/62.Snt /tmp/spdk_tgt_config.json.Q8u 00:05:53.978 + exit 0 00:05:53.978 INFO: changing configuration and checking if this can be detected... 00:05:53.978 09:52:52 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:53.978 09:52:52 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:53.978 09:52:52 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:53.978 09:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.237 09:52:52 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.237 09:52:52 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:54.237 09:52:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.237 + '[' 2 -ne 2 ']' 00:05:54.237 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:54.237 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:54.237 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:54.237 +++ basename /dev/fd/62 00:05:54.237 ++ mktemp /tmp/62.XXX 00:05:54.237 + tmp_file_1=/tmp/62.2Hd 00:05:54.237 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.237 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.237 + tmp_file_2=/tmp/spdk_tgt_config.json.OJt 00:05:54.237 + ret=0 00:05:54.237 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.495 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:54.495 + diff -u /tmp/62.2Hd /tmp/spdk_tgt_config.json.OJt 00:05:54.495 + ret=1 00:05:54.495 + echo '=== Start of file: /tmp/62.2Hd ===' 00:05:54.495 + cat /tmp/62.2Hd 00:05:54.495 + echo '=== End of file: /tmp/62.2Hd ===' 00:05:54.495 + echo '' 00:05:54.495 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OJt ===' 00:05:54.495 + cat /tmp/spdk_tgt_config.json.OJt 00:05:54.495 + echo '=== End of file: /tmp/spdk_tgt_config.json.OJt ===' 00:05:54.495 + echo '' 00:05:54.495 + rm /tmp/62.2Hd /tmp/spdk_tgt_config.json.OJt 00:05:54.495 + exit 1 00:05:54.495 INFO: configuration change detected. 00:05:54.495 09:52:53 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:54.495 09:52:53 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:54.495 09:52:53 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:54.495 09:52:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.495 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:54.495 09:52:53 -- json_config/json_config.sh@360 -- # local ret=0 00:05:54.495 09:52:53 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:54.495 09:52:53 -- json_config/json_config.sh@370 -- # [[ -n 68205 ]] 00:05:54.495 09:52:53 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:54.495 09:52:53 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:54.495 09:52:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.495 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:54.495 09:52:53 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:54.495 09:52:53 -- json_config/json_config.sh@246 -- # uname -s 00:05:54.754 09:52:53 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:54.754 09:52:53 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:54.754 09:52:53 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:54.754 09:52:53 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:54.754 09:52:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.754 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:54.754 09:52:53 -- json_config/json_config.sh@376 -- # killprocess 68205 00:05:54.754 09:52:53 -- common/autotest_common.sh@936 -- # '[' -z 68205 ']' 00:05:54.754 09:52:53 -- common/autotest_common.sh@940 -- # kill -0 68205 00:05:54.754 09:52:53 -- common/autotest_common.sh@941 -- # uname 00:05:54.754 09:52:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.754 09:52:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68205 00:05:54.754 killing process with pid 68205 00:05:54.754 09:52:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.754 09:52:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.754 09:52:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68205' 00:05:54.754 
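The "configuration change detected" result above is the negative counterpart of the earlier check: the test deletes the marker bdev MallocBdevForConfigChangeCheck over RPC, repeats the same save/sort/diff pipeline, and this time treats a non-zero diff status (ret=1) as success. A short sketch of that step, under the same stdin/stdout assumption about config_filter.py and with illustrative temporary file names:

  repo=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk_tgt.sock
  # remove the bdev that exists only so the live config can diverge from the file on disk
  "$repo"/scripts/rpc.py -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
  # save and sort again, then diff; a failing diff now means the change was detected
  "$repo"/scripts/rpc.py -s "$sock" save_config \
      | "$repo"/test/json_config/config_filter.py -method sort > /tmp/live.sorted
  "$repo"/test/json_config/config_filter.py -method sort \
      < "$repo"/spdk_tgt_config.json > /tmp/file.sorted
  if ! diff -u /tmp/file.sorted /tmp/live.sorted; then
      echo 'INFO: configuration change detected.'
  fi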
09:52:53 -- common/autotest_common.sh@955 -- # kill 68205 00:05:54.754 09:52:53 -- common/autotest_common.sh@960 -- # wait 68205 00:05:55.013 09:52:53 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.013 09:52:53 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:55.013 09:52:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.013 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.013 09:52:53 -- json_config/json_config.sh@381 -- # return 0 00:05:55.013 INFO: Success 00:05:55.013 09:52:53 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:55.013 ************************************ 00:05:55.013 END TEST json_config 00:05:55.013 ************************************ 00:05:55.013 00:05:55.013 real 0m8.189s 00:05:55.013 user 0m11.658s 00:05:55.013 sys 0m1.749s 00:05:55.013 09:52:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.013 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.013 09:52:53 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.013 09:52:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.013 09:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.013 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.013 ************************************ 00:05:55.013 START TEST json_config_extra_key 00:05:55.013 ************************************ 00:05:55.013 09:52:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.013 09:52:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.013 09:52:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.013 09:52:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.013 09:52:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.013 09:52:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.013 09:52:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.013 09:52:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.013 09:52:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.013 09:52:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.013 09:52:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.013 09:52:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.013 09:52:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.013 09:52:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.013 09:52:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.013 09:52:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.013 09:52:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.013 09:52:53 -- scripts/common.sh@344 -- # : 1 00:05:55.013 09:52:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.013 09:52:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.013 09:52:53 -- scripts/common.sh@364 -- # decimal 1 00:05:55.272 09:52:53 -- scripts/common.sh@352 -- # local d=1 00:05:55.272 09:52:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.272 09:52:53 -- scripts/common.sh@354 -- # echo 1 00:05:55.272 09:52:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.272 09:52:53 -- scripts/common.sh@365 -- # decimal 2 00:05:55.272 09:52:53 -- scripts/common.sh@352 -- # local d=2 00:05:55.272 09:52:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.272 09:52:53 -- scripts/common.sh@354 -- # echo 2 00:05:55.272 09:52:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.272 09:52:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.272 09:52:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.272 09:52:53 -- scripts/common.sh@367 -- # return 0 00:05:55.272 09:52:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.272 09:52:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.272 --rc genhtml_branch_coverage=1 00:05:55.272 --rc genhtml_function_coverage=1 00:05:55.272 --rc genhtml_legend=1 00:05:55.272 --rc geninfo_all_blocks=1 00:05:55.272 --rc geninfo_unexecuted_blocks=1 00:05:55.272 00:05:55.272 ' 00:05:55.272 09:52:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.272 --rc genhtml_branch_coverage=1 00:05:55.272 --rc genhtml_function_coverage=1 00:05:55.272 --rc genhtml_legend=1 00:05:55.272 --rc geninfo_all_blocks=1 00:05:55.272 --rc geninfo_unexecuted_blocks=1 00:05:55.272 00:05:55.272 ' 00:05:55.272 09:52:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.272 --rc genhtml_branch_coverage=1 00:05:55.272 --rc genhtml_function_coverage=1 00:05:55.272 --rc genhtml_legend=1 00:05:55.272 --rc geninfo_all_blocks=1 00:05:55.272 --rc geninfo_unexecuted_blocks=1 00:05:55.272 00:05:55.272 ' 00:05:55.272 09:52:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.272 --rc genhtml_branch_coverage=1 00:05:55.272 --rc genhtml_function_coverage=1 00:05:55.272 --rc genhtml_legend=1 00:05:55.272 --rc geninfo_all_blocks=1 00:05:55.272 --rc geninfo_unexecuted_blocks=1 00:05:55.272 00:05:55.272 ' 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.272 09:52:53 -- nvmf/common.sh@7 -- # uname -s 00:05:55.272 09:52:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.272 09:52:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.272 09:52:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.272 09:52:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.272 09:52:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.272 09:52:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.272 09:52:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.272 09:52:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.272 09:52:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.272 09:52:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.272 09:52:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:05:55.272 09:52:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:05:55.272 09:52:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.272 09:52:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.272 09:52:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.272 09:52:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.272 09:52:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.272 09:52:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.272 09:52:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.272 09:52:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.272 09:52:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.272 09:52:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.272 09:52:53 -- paths/export.sh@5 -- # export PATH 00:05:55.272 09:52:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.272 09:52:53 -- nvmf/common.sh@46 -- # : 0 00:05:55.272 09:52:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:55.272 09:52:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:55.272 09:52:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:55.272 09:52:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.272 09:52:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.272 09:52:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:55.272 09:52:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:55.272 09:52:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:55.272 INFO: launching applications... 00:05:55.272 Waiting for target to run... 00:05:55.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
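The extra_key variant drives the target entirely from a pre-made JSON file: the script keeps one entry per app (PID, RPC socket, extra spdk_tgt parameters, config path) and then launches build/bin/spdk_tgt with those values before waiting on the RPC socket. A minimal sketch of the launch-and-wait step; the waitforlisten helper's internals are not shown in the trace, so the polling loop below (using the generic rpc_get_methods call and a 0.5 s interval) is only an assumption about roughly how it behaves, with the retry cap of 100 taken from max_retries in the log.

  repo=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk_tgt.sock
  "$repo"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
      --json "$repo"/test/json_config/extra_key.json &
  app_pid=$!
  # poll the RPC socket until the target answers, giving up after 100 tries
  for (( i = 0; i < 100; i++ )); do
      "$repo"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done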
00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68388 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68388 /var/tmp/spdk_tgt.sock 00:05:55.272 09:52:53 -- common/autotest_common.sh@829 -- # '[' -z 68388 ']' 00:05:55.272 09:52:53 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:55.272 09:52:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.272 09:52:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.272 09:52:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.272 09:52:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.272 09:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.272 [2024-12-16 09:52:53.733595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:55.272 [2024-12-16 09:52:53.733886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68388 ] 00:05:55.839 [2024-12-16 09:52:54.157913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.839 [2024-12-16 09:52:54.201043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.839 [2024-12-16 09:52:54.201470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.406 09:52:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.406 09:52:54 -- common/autotest_common.sh@862 -- # return 0 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:56.406 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:56.406 INFO: shutting down applications... 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68388 ]] 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68388 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68388 00:05:56.406 09:52:54 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68388 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:56.663 SPDK target shutdown done 00:05:56.663 Success 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:56.663 09:52:55 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:56.663 00:05:56.663 real 0m1.772s 00:05:56.663 user 0m1.644s 00:05:56.663 sys 0m0.468s 00:05:56.663 ************************************ 00:05:56.663 END TEST json_config_extra_key 00:05:56.663 ************************************ 00:05:56.663 09:52:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.663 09:52:55 -- common/autotest_common.sh@10 -- # set +x 00:05:56.921 09:52:55 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.921 09:52:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.921 09:52:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.921 09:52:55 -- common/autotest_common.sh@10 -- # set +x 00:05:56.921 ************************************ 00:05:56.921 START TEST alias_rpc 00:05:56.921 ************************************ 00:05:56.921 09:52:55 -- common/autotest_common.sh@1114 -- # 
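Shutdown in this test is cooperative: json_config_test_shutdown_app sends SIGINT and then polls the PID with kill -0 until the reactor exits on its own. The loop bounds visible in the trace are 30 attempts, half a second apart; a sketch of the same pattern:

  app_pid=68388   # PID recorded when the target was launched (68388 in this run)
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 only tests that the process still exists; it delivers no signal
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done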
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:56.921 * Looking for test storage... 00:05:56.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:56.921 09:52:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.921 09:52:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.921 09:52:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:56.921 09:52:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:56.921 09:52:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:56.921 09:52:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:56.921 09:52:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:56.921 09:52:55 -- scripts/common.sh@335 -- # IFS=.-: 00:05:56.921 09:52:55 -- scripts/common.sh@335 -- # read -ra ver1 00:05:56.921 09:52:55 -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.921 09:52:55 -- scripts/common.sh@336 -- # read -ra ver2 00:05:56.921 09:52:55 -- scripts/common.sh@337 -- # local 'op=<' 00:05:56.921 09:52:55 -- scripts/common.sh@339 -- # ver1_l=2 00:05:56.921 09:52:55 -- scripts/common.sh@340 -- # ver2_l=1 00:05:56.921 09:52:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:56.921 09:52:55 -- scripts/common.sh@343 -- # case "$op" in 00:05:56.922 09:52:55 -- scripts/common.sh@344 -- # : 1 00:05:56.922 09:52:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:56.922 09:52:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.922 09:52:55 -- scripts/common.sh@364 -- # decimal 1 00:05:56.922 09:52:55 -- scripts/common.sh@352 -- # local d=1 00:05:56.922 09:52:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.922 09:52:55 -- scripts/common.sh@354 -- # echo 1 00:05:56.922 09:52:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:56.922 09:52:55 -- scripts/common.sh@365 -- # decimal 2 00:05:56.922 09:52:55 -- scripts/common.sh@352 -- # local d=2 00:05:56.922 09:52:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.922 09:52:55 -- scripts/common.sh@354 -- # echo 2 00:05:56.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.922 09:52:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:56.922 09:52:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:56.922 09:52:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:56.922 09:52:55 -- scripts/common.sh@367 -- # return 0 00:05:56.922 09:52:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.922 09:52:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:56.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.922 --rc genhtml_branch_coverage=1 00:05:56.922 --rc genhtml_function_coverage=1 00:05:56.922 --rc genhtml_legend=1 00:05:56.922 --rc geninfo_all_blocks=1 00:05:56.922 --rc geninfo_unexecuted_blocks=1 00:05:56.922 00:05:56.922 ' 00:05:56.922 09:52:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:56.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.922 --rc genhtml_branch_coverage=1 00:05:56.922 --rc genhtml_function_coverage=1 00:05:56.922 --rc genhtml_legend=1 00:05:56.922 --rc geninfo_all_blocks=1 00:05:56.922 --rc geninfo_unexecuted_blocks=1 00:05:56.922 00:05:56.922 ' 00:05:56.922 09:52:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:56.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.922 --rc genhtml_branch_coverage=1 00:05:56.922 --rc genhtml_function_coverage=1 00:05:56.922 --rc genhtml_legend=1 00:05:56.922 --rc geninfo_all_blocks=1 00:05:56.922 --rc geninfo_unexecuted_blocks=1 00:05:56.922 00:05:56.922 ' 00:05:56.922 09:52:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:56.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.922 --rc genhtml_branch_coverage=1 00:05:56.922 --rc genhtml_function_coverage=1 00:05:56.922 --rc genhtml_legend=1 00:05:56.922 --rc geninfo_all_blocks=1 00:05:56.922 --rc geninfo_unexecuted_blocks=1 00:05:56.922 00:05:56.922 ' 00:05:56.922 09:52:55 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.922 09:52:55 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68470 00:05:56.922 09:52:55 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68470 00:05:56.922 09:52:55 -- common/autotest_common.sh@829 -- # '[' -z 68470 ']' 00:05:56.922 09:52:55 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.922 09:52:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.922 09:52:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.922 09:52:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.922 09:52:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.922 09:52:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.180 [2024-12-16 09:52:55.562511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.180 [2024-12-16 09:52:55.563012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68470 ] 00:05:57.180 [2024-12-16 09:52:55.699289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.180 [2024-12-16 09:52:55.753670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.180 [2024-12-16 09:52:55.754126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.196 09:52:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.196 09:52:56 -- common/autotest_common.sh@862 -- # return 0 00:05:58.196 09:52:56 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.196 09:52:56 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68470 00:05:58.196 09:52:56 -- common/autotest_common.sh@936 -- # '[' -z 68470 ']' 00:05:58.196 09:52:56 -- common/autotest_common.sh@940 -- # kill -0 68470 00:05:58.196 09:52:56 -- common/autotest_common.sh@941 -- # uname 00:05:58.196 09:52:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.196 09:52:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68470 00:05:58.454 killing process with pid 68470 00:05:58.454 09:52:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.454 09:52:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.454 09:52:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68470' 00:05:58.454 09:52:56 -- common/autotest_common.sh@955 -- # kill 68470 00:05:58.454 09:52:56 -- common/autotest_common.sh@960 -- # wait 68470 00:05:58.712 ************************************ 00:05:58.713 END TEST alias_rpc 00:05:58.713 ************************************ 00:05:58.713 00:05:58.713 real 0m1.862s 00:05:58.713 user 0m2.109s 00:05:58.713 sys 0m0.439s 00:05:58.713 09:52:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.713 09:52:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.713 09:52:57 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:58.713 09:52:57 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.713 09:52:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.713 09:52:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.713 09:52:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.713 ************************************ 00:05:58.713 START TEST dpdk_mem_utility 00:05:58.713 ************************************ 00:05:58.713 09:52:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.713 * Looking for test storage... 
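Teardown for the alias_rpc target goes through the killprocess helper, which first confirms the PID is still alive (kill -0), looks up its process name with ps, and only then signals and waits. In this run the name resolves to reactor_0, so the plain kill/wait branch is taken; the comparison against sudo guards a different code path that this trace does not exercise. A sketch of the path seen here:

  pid=68470
  kill -0 "$pid"                                   # fail fast if the process is already gone
  process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an spdk_tgt reactor thread
  if [ "$process_name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
      # wait only succeeds because spdk_tgt was started by this same shell
      wait "$pid"
  fi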
00:05:58.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:58.713 09:52:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:58.713 09:52:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:58.713 09:52:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:58.971 09:52:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:58.971 09:52:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:58.971 09:52:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:58.971 09:52:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:58.971 09:52:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:58.971 09:52:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:58.971 09:52:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.971 09:52:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:58.971 09:52:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:58.971 09:52:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:58.971 09:52:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:58.971 09:52:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:58.971 09:52:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:58.971 09:52:57 -- scripts/common.sh@344 -- # : 1 00:05:58.971 09:52:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:58.971 09:52:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.971 09:52:57 -- scripts/common.sh@364 -- # decimal 1 00:05:58.971 09:52:57 -- scripts/common.sh@352 -- # local d=1 00:05:58.971 09:52:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.971 09:52:57 -- scripts/common.sh@354 -- # echo 1 00:05:58.971 09:52:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:58.971 09:52:57 -- scripts/common.sh@365 -- # decimal 2 00:05:58.971 09:52:57 -- scripts/common.sh@352 -- # local d=2 00:05:58.971 09:52:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.971 09:52:57 -- scripts/common.sh@354 -- # echo 2 00:05:58.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.971 09:52:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:58.971 09:52:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:58.971 09:52:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:58.971 09:52:57 -- scripts/common.sh@367 -- # return 0 00:05:58.971 09:52:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.971 09:52:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:58.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.971 --rc genhtml_branch_coverage=1 00:05:58.971 --rc genhtml_function_coverage=1 00:05:58.971 --rc genhtml_legend=1 00:05:58.971 --rc geninfo_all_blocks=1 00:05:58.971 --rc geninfo_unexecuted_blocks=1 00:05:58.971 00:05:58.971 ' 00:05:58.971 09:52:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:58.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.971 --rc genhtml_branch_coverage=1 00:05:58.971 --rc genhtml_function_coverage=1 00:05:58.971 --rc genhtml_legend=1 00:05:58.971 --rc geninfo_all_blocks=1 00:05:58.971 --rc geninfo_unexecuted_blocks=1 00:05:58.971 00:05:58.971 ' 00:05:58.971 09:52:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:58.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.971 --rc genhtml_branch_coverage=1 00:05:58.971 --rc genhtml_function_coverage=1 00:05:58.971 --rc genhtml_legend=1 00:05:58.971 --rc geninfo_all_blocks=1 00:05:58.971 --rc geninfo_unexecuted_blocks=1 00:05:58.971 00:05:58.971 ' 00:05:58.971 09:52:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:58.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.971 --rc genhtml_branch_coverage=1 00:05:58.971 --rc genhtml_function_coverage=1 00:05:58.971 --rc genhtml_legend=1 00:05:58.971 --rc geninfo_all_blocks=1 00:05:58.971 --rc geninfo_unexecuted_blocks=1 00:05:58.971 00:05:58.971 ' 00:05:58.971 09:52:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.971 09:52:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68565 00:05:58.971 09:52:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68565 00:05:58.971 09:52:57 -- common/autotest_common.sh@829 -- # '[' -z 68565 ']' 00:05:58.971 09:52:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.971 09:52:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.971 09:52:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.971 09:52:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.971 09:52:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.971 09:52:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.971 [2024-12-16 09:52:57.466550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
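The dpdk_mem_utility test starting here asks the target for a DPDK memory snapshot over RPC (env_dpdk_get_mem_stats, which replies with the dump file name /tmp/spdk_mem_dump.txt) and then runs scripts/dpdk_mem_info.py on it, first for the heap/mempool/memzone summary and then with -m 0 for the per-element detail of malloc heap 0. A sketch of the same sequence against the default RPC socket this test uses; dpdk_mem_info.py is assumed to pick up the dump file written by the RPC, since the trace passes it no explicit path:

  repo=/home/vagrant/spdk_repo/spdk
  # ask the running spdk_tgt to write its DPDK memory statistics to a file
  "$repo"/scripts/rpc.py env_dpdk_get_mem_stats
  # => { "filename": "/tmp/spdk_mem_dump.txt" }
  # summarize heaps, mempools and memzones from that dump
  "$repo"/scripts/dpdk_mem_info.py
  # show the free/busy element breakdown for malloc heap 0
  "$repo"/scripts/dpdk_mem_info.py -m 0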
00:05:58.971 [2024-12-16 09:52:57.466885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68565 ] 00:05:59.230 [2024-12-16 09:52:57.601853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.230 [2024-12-16 09:52:57.659989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.230 [2024-12-16 09:52:57.660365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.166 09:52:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.166 09:52:58 -- common/autotest_common.sh@862 -- # return 0 00:06:00.166 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:00.166 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:00.166 09:52:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.166 09:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:00.166 { 00:06:00.166 "filename": "/tmp/spdk_mem_dump.txt" 00:06:00.166 } 00:06:00.166 09:52:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.166 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:00.166 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:00.166 1 heaps totaling size 814.000000 MiB 00:06:00.166 size: 814.000000 MiB heap id: 0 00:06:00.166 end heaps---------- 00:06:00.166 8 mempools totaling size 598.116089 MiB 00:06:00.166 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:00.166 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:00.166 size: 84.521057 MiB name: bdev_io_68565 00:06:00.166 size: 51.011292 MiB name: evtpool_68565 00:06:00.166 size: 50.003479 MiB name: msgpool_68565 00:06:00.166 size: 21.763794 MiB name: PDU_Pool 00:06:00.166 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:00.166 size: 0.026123 MiB name: Session_Pool 00:06:00.166 end mempools------- 00:06:00.166 6 memzones totaling size 4.142822 MiB 00:06:00.166 size: 1.000366 MiB name: RG_ring_0_68565 00:06:00.166 size: 1.000366 MiB name: RG_ring_1_68565 00:06:00.166 size: 1.000366 MiB name: RG_ring_4_68565 00:06:00.166 size: 1.000366 MiB name: RG_ring_5_68565 00:06:00.166 size: 0.125366 MiB name: RG_ring_2_68565 00:06:00.166 size: 0.015991 MiB name: RG_ring_3_68565 00:06:00.166 end memzones------- 00:06:00.166 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:00.166 heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15 00:06:00.166 list of free elements. 
size: 12.487854 MiB 00:06:00.166 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:00.166 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:00.166 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:00.166 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:00.166 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:00.166 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:00.166 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:00.166 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:00.166 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:00.166 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:06:00.166 element at address: 0x20000b200000 with size: 0.489990 MiB 00:06:00.166 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:00.166 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:00.166 element at address: 0x200027e00000 with size: 0.398499 MiB 00:06:00.166 element at address: 0x200003a00000 with size: 0.351685 MiB 00:06:00.166 list of standard malloc elements. size: 199.249573 MiB 00:06:00.166 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:00.166 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:00.166 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:00.166 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:00.166 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:00.166 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:00.166 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:00.166 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:00.166 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:00.166 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:06:00.166 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:00.166 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:00.166 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:00.167 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e66040 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e66100 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e700 with size: 0.000183 MiB 
00:06:00.167 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:00.167 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:00.167 list of memzone associated elements. 
size: 602.262573 MiB 00:06:00.167 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:00.167 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:00.167 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:00.167 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:00.167 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:00.167 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68565_0 00:06:00.167 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:00.167 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68565_0 00:06:00.167 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:00.167 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68565_0 00:06:00.167 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:00.167 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:00.167 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:00.167 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:00.167 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:00.167 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68565 00:06:00.167 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:00.167 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68565 00:06:00.167 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:00.167 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68565 00:06:00.167 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:00.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:00.167 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:00.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:00.167 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:00.167 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:00.167 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:00.167 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:00.167 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:00.167 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68565 00:06:00.167 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:00.167 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68565 00:06:00.167 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:00.167 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68565 00:06:00.167 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:00.167 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68565 00:06:00.167 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:00.167 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68565 00:06:00.167 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:00.167 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:00.167 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:00.167 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:00.168 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:00.168 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:00.168 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:00.168 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68565 00:06:00.168 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:00.168 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:00.168 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:06:00.168 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:00.168 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:00.168 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68565 00:06:00.168 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:06:00.168 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:00.168 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:00.168 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68565 00:06:00.168 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:00.168 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68565 00:06:00.168 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:06:00.168 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:00.168 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:00.168 09:52:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68565 00:06:00.168 09:52:58 -- common/autotest_common.sh@936 -- # '[' -z 68565 ']' 00:06:00.168 09:52:58 -- common/autotest_common.sh@940 -- # kill -0 68565 00:06:00.168 09:52:58 -- common/autotest_common.sh@941 -- # uname 00:06:00.168 09:52:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.168 09:52:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68565 00:06:00.168 killing process with pid 68565 00:06:00.168 09:52:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.168 09:52:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.168 09:52:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68565' 00:06:00.168 09:52:58 -- common/autotest_common.sh@955 -- # kill 68565 00:06:00.168 09:52:58 -- common/autotest_common.sh@960 -- # wait 68565 00:06:00.427 ************************************ 00:06:00.427 END TEST dpdk_mem_utility 00:06:00.427 ************************************ 00:06:00.427 00:06:00.427 real 0m1.781s 00:06:00.427 user 0m1.951s 00:06:00.427 sys 0m0.450s 00:06:00.427 09:52:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.427 09:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.427 09:52:59 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:00.427 09:52:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.427 09:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.427 09:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.685 ************************************ 00:06:00.685 START TEST event 00:06:00.685 ************************************ 00:06:00.685 09:52:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:00.685 * Looking for test storage... 
00:06:00.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.685 09:52:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.685 09:52:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.685 09:52:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.685 09:52:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.685 09:52:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.685 09:52:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.685 09:52:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.685 09:52:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.685 09:52:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.685 09:52:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.685 09:52:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.685 09:52:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.685 09:52:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.685 09:52:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.685 09:52:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.685 09:52:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.685 09:52:59 -- scripts/common.sh@344 -- # : 1 00:06:00.685 09:52:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.685 09:52:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.685 09:52:59 -- scripts/common.sh@364 -- # decimal 1 00:06:00.685 09:52:59 -- scripts/common.sh@352 -- # local d=1 00:06:00.685 09:52:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.686 09:52:59 -- scripts/common.sh@354 -- # echo 1 00:06:00.686 09:52:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.686 09:52:59 -- scripts/common.sh@365 -- # decimal 2 00:06:00.686 09:52:59 -- scripts/common.sh@352 -- # local d=2 00:06:00.686 09:52:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.686 09:52:59 -- scripts/common.sh@354 -- # echo 2 00:06:00.686 09:52:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.686 09:52:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.686 09:52:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.686 09:52:59 -- scripts/common.sh@367 -- # return 0 00:06:00.686 09:52:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.686 09:52:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.686 --rc genhtml_branch_coverage=1 00:06:00.686 --rc genhtml_function_coverage=1 00:06:00.686 --rc genhtml_legend=1 00:06:00.686 --rc geninfo_all_blocks=1 00:06:00.686 --rc geninfo_unexecuted_blocks=1 00:06:00.686 00:06:00.686 ' 00:06:00.686 09:52:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.686 --rc genhtml_branch_coverage=1 00:06:00.686 --rc genhtml_function_coverage=1 00:06:00.686 --rc genhtml_legend=1 00:06:00.686 --rc geninfo_all_blocks=1 00:06:00.686 --rc geninfo_unexecuted_blocks=1 00:06:00.686 00:06:00.686 ' 00:06:00.686 09:52:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.686 --rc genhtml_branch_coverage=1 00:06:00.686 --rc genhtml_function_coverage=1 00:06:00.686 --rc genhtml_legend=1 00:06:00.686 --rc geninfo_all_blocks=1 00:06:00.686 --rc geninfo_unexecuted_blocks=1 00:06:00.686 00:06:00.686 ' 00:06:00.686 09:52:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.686 --rc genhtml_branch_coverage=1 00:06:00.686 --rc genhtml_function_coverage=1 00:06:00.686 --rc genhtml_legend=1 00:06:00.686 --rc geninfo_all_blocks=1 00:06:00.686 --rc geninfo_unexecuted_blocks=1 00:06:00.686 00:06:00.686 ' 00:06:00.686 09:52:59 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:00.686 09:52:59 -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.686 09:52:59 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.686 09:52:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:00.686 09:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.686 09:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.686 ************************************ 00:06:00.686 START TEST event_perf 00:06:00.686 ************************************ 00:06:00.686 09:52:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.686 Running I/O for 1 seconds...[2024-12-16 09:52:59.274832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.686 [2024-12-16 09:52:59.275058] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68667 ] 00:06:00.944 [2024-12-16 09:52:59.411706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.944 [2024-12-16 09:52:59.466292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.944 [2024-12-16 09:52:59.466424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.944 [2024-12-16 09:52:59.466555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.944 [2024-12-16 09:52:59.466560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.330 Running I/O for 1 seconds... 00:06:02.330 lcore 0: 208786 00:06:02.330 lcore 1: 208788 00:06:02.330 lcore 2: 208788 00:06:02.330 lcore 3: 208787 00:06:02.330 done. 00:06:02.330 00:06:02.330 ************************************ 00:06:02.330 END TEST event_perf 00:06:02.330 ************************************ 00:06:02.330 real 0m1.274s 00:06:02.330 user 0m4.090s 00:06:02.330 sys 0m0.066s 00:06:02.330 09:53:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.330 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.330 09:53:00 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.330 09:53:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.330 09:53:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.330 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.330 ************************************ 00:06:02.330 START TEST event_reactor 00:06:02.330 ************************************ 00:06:02.331 09:53:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:02.331 [2024-12-16 09:53:00.599524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
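For reference, the event_perf case above is just the standalone benchmark run with a core mask and a duration; a minimal sketch of invoking it by hand and totalling the per-lcore counters it prints (binary path matches the repo layout in the log; the awk filter assumes the plain "lcore N: count" output format you get without the Jenkins timestamps):

    bin=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
    # -m 0xF: reactors on cores 0-3, -t 1: run for one second
    "$bin" -m 0xF -t 1 | awk '/^lcore/ { total += $NF } END { print "events in 1s:", total }'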
00:06:02.331 [2024-12-16 09:53:00.599616] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68700 ] 00:06:02.331 [2024-12-16 09:53:00.729432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.331 [2024-12-16 09:53:00.782786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.266 test_start 00:06:03.266 oneshot 00:06:03.266 tick 100 00:06:03.266 tick 100 00:06:03.266 tick 250 00:06:03.266 tick 100 00:06:03.266 tick 100 00:06:03.266 tick 100 00:06:03.266 tick 250 00:06:03.266 tick 500 00:06:03.266 tick 100 00:06:03.266 tick 100 00:06:03.266 tick 250 00:06:03.266 tick 100 00:06:03.266 tick 100 00:06:03.266 test_end 00:06:03.266 ************************************ 00:06:03.266 END TEST event_reactor 00:06:03.266 ************************************ 00:06:03.266 00:06:03.266 real 0m1.252s 00:06:03.266 user 0m1.099s 00:06:03.266 sys 0m0.048s 00:06:03.266 09:53:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.266 09:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.266 09:53:01 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.266 09:53:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:03.266 09:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.266 09:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.525 ************************************ 00:06:03.525 START TEST event_reactor_perf 00:06:03.525 ************************************ 00:06:03.525 09:53:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.525 [2024-12-16 09:53:01.905209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
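The START TEST / END TEST banners and the real/user/sys timings wrapped around each case come from the run_test helper in autotest_common.sh; this is only a rough approximation of that pattern, not the SPDK helper itself:

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # timing goes to stderr, like the real wrapper's summary
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test_sketch event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1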
00:06:03.525 [2024-12-16 09:53:01.905302] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68741 ] 00:06:03.525 [2024-12-16 09:53:02.041849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.525 [2024-12-16 09:53:02.089911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.901 test_start 00:06:04.901 test_end 00:06:04.901 Performance: 472878 events per second 00:06:04.901 00:06:04.901 real 0m1.252s 00:06:04.901 user 0m1.091s 00:06:04.901 sys 0m0.057s 00:06:04.901 09:53:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.901 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:04.901 ************************************ 00:06:04.901 END TEST event_reactor_perf 00:06:04.901 ************************************ 00:06:04.901 09:53:03 -- event/event.sh@49 -- # uname -s 00:06:04.901 09:53:03 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.901 09:53:03 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.901 09:53:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.901 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:04.901 ************************************ 00:06:04.901 START TEST event_scheduler 00:06:04.901 ************************************ 00:06:04.901 09:53:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:04.901 * Looking for test storage... 00:06:04.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:04.901 09:53:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.901 09:53:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.901 09:53:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.901 09:53:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.901 09:53:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.901 09:53:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.901 09:53:03 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.901 09:53:03 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.901 09:53:03 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.901 09:53:03 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.901 09:53:03 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.901 09:53:03 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.901 09:53:03 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.901 09:53:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.901 09:53:03 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.901 09:53:03 -- scripts/common.sh@344 -- # : 1 00:06:04.901 09:53:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.901 09:53:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.901 09:53:03 -- scripts/common.sh@364 -- # decimal 1 00:06:04.901 09:53:03 -- scripts/common.sh@352 -- # local d=1 00:06:04.901 09:53:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.901 09:53:03 -- scripts/common.sh@354 -- # echo 1 00:06:04.901 09:53:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.901 09:53:03 -- scripts/common.sh@365 -- # decimal 2 00:06:04.901 09:53:03 -- scripts/common.sh@352 -- # local d=2 00:06:04.901 09:53:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.901 09:53:03 -- scripts/common.sh@354 -- # echo 2 00:06:04.901 09:53:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.901 09:53:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.901 09:53:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.901 09:53:03 -- scripts/common.sh@367 -- # return 0 00:06:04.901 09:53:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.901 --rc genhtml_branch_coverage=1 00:06:04.901 --rc genhtml_function_coverage=1 00:06:04.901 --rc genhtml_legend=1 00:06:04.901 --rc geninfo_all_blocks=1 00:06:04.901 --rc geninfo_unexecuted_blocks=1 00:06:04.901 00:06:04.901 ' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.901 --rc genhtml_branch_coverage=1 00:06:04.901 --rc genhtml_function_coverage=1 00:06:04.901 --rc genhtml_legend=1 00:06:04.901 --rc geninfo_all_blocks=1 00:06:04.901 --rc geninfo_unexecuted_blocks=1 00:06:04.901 00:06:04.901 ' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.901 --rc genhtml_branch_coverage=1 00:06:04.901 --rc genhtml_function_coverage=1 00:06:04.901 --rc genhtml_legend=1 00:06:04.901 --rc geninfo_all_blocks=1 00:06:04.901 --rc geninfo_unexecuted_blocks=1 00:06:04.901 00:06:04.901 ' 00:06:04.901 09:53:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.901 --rc genhtml_branch_coverage=1 00:06:04.901 --rc genhtml_function_coverage=1 00:06:04.901 --rc genhtml_legend=1 00:06:04.901 --rc geninfo_all_blocks=1 00:06:04.901 --rc geninfo_unexecuted_blocks=1 00:06:04.901 00:06:04.901 ' 00:06:04.901 09:53:03 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.901 09:53:03 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68804 00:06:04.901 09:53:03 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.901 09:53:03 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.901 09:53:03 -- scheduler/scheduler.sh@37 -- # waitforlisten 68804 00:06:04.901 09:53:03 -- common/autotest_common.sh@829 -- # '[' -z 68804 ']' 00:06:04.901 09:53:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.901 09:53:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.901 09:53:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
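waitforlisten above simply polls the freshly started scheduler app's RPC socket until it answers; a hedged approximation of that loop (default socket /var/tmp/spdk.sock as in the log; using rpc_get_methods as the probe and a 0.5 s retry interval are assumptions, not the exact helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        # stop as soon as the target answers any RPC on the default socket
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done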
00:06:04.901 09:53:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.901 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:04.901 [2024-12-16 09:53:03.432200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.901 [2024-12-16 09:53:03.432913] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:06:05.160 [2024-12-16 09:53:03.573302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.160 [2024-12-16 09:53:03.639971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.160 [2024-12-16 09:53:03.640114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.160 [2024-12-16 09:53:03.640938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.160 [2024-12-16 09:53:03.641012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.160 09:53:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.160 09:53:03 -- common/autotest_common.sh@862 -- # return 0 00:06:05.160 09:53:03 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.160 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.160 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.160 POWER: Env isn't set yet! 00:06:05.160 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:05.160 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.160 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.160 POWER: Attempting to initialise PSTAT power management... 00:06:05.160 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.160 POWER: Cannot set governor of lcore 0 to performance 00:06:05.160 POWER: Attempting to initialise AMD PSTATE power management... 00:06:05.160 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.160 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.160 POWER: Attempting to initialise CPPC power management... 00:06:05.160 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:05.160 POWER: Cannot set governor of lcore 0 to userspace 00:06:05.160 POWER: Attempting to initialise VM power management... 
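The string of POWER errors above just means this VM exposes no cpufreq scaling driver, so every governor probe fails and the dynamic scheduler falls back as shown below. A quick manual check of what the host actually offers (standard sysfs paths; cpu0 used as a representative core):

    for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver \
             /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor; do
        if [ -r "$f" ]; then echo "$f: $(cat "$f")"; else echo "$f: not available"; fi
    done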
00:06:05.160 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:05.160 POWER: Unable to set Power Management Environment for lcore 0 00:06:05.160 [2024-12-16 09:53:03.691341] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:05.160 [2024-12-16 09:53:03.691518] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:05.160 [2024-12-16 09:53:03.691569] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.160 [2024-12-16 09:53:03.691731] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.160 [2024-12-16 09:53:03.691843] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.160 [2024-12-16 09:53:03.691890] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.160 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.160 09:53:03 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.160 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.160 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.418 [2024-12-16 09:53:03.786672] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:05.418 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.418 09:53:03 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.418 09:53:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.418 09:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 ************************************ 00:06:05.419 START TEST scheduler_create_thread 00:06:05.419 ************************************ 00:06:05.419 09:53:03 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 2 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 3 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 4 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 5 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 6 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 7 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 8 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 9 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 10 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.419 09:53:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.419 09:53:03 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.419 09:53:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.419 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.794 09:53:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.794 09:53:05 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.794 09:53:05 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.794 09:53:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.794 09:53:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 09:53:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.168 00:06:08.168 real 0m2.613s 00:06:08.168 user 0m0.011s 00:06:08.168 sys 0m0.007s 00:06:08.168 09:53:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.168 ************************************ 00:06:08.168 END TEST scheduler_create_thread 
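Condensed, the thread-management RPC sequence that scheduler_create_thread exercised above looks like this (same plugin and flags as in the xtrace; assumes rpc.py can locate scheduler_plugin the way the test arranges it, and uses the default RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid2=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid2"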
00:06:08.168 ************************************ 00:06:08.168 09:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 09:53:06 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.168 09:53:06 -- scheduler/scheduler.sh@46 -- # killprocess 68804 00:06:08.168 09:53:06 -- common/autotest_common.sh@936 -- # '[' -z 68804 ']' 00:06:08.168 09:53:06 -- common/autotest_common.sh@940 -- # kill -0 68804 00:06:08.168 09:53:06 -- common/autotest_common.sh@941 -- # uname 00:06:08.168 09:53:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.168 09:53:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68804 00:06:08.168 09:53:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:08.168 09:53:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:08.168 killing process with pid 68804 00:06:08.169 09:53:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68804' 00:06:08.169 09:53:06 -- common/autotest_common.sh@955 -- # kill 68804 00:06:08.169 09:53:06 -- common/autotest_common.sh@960 -- # wait 68804 00:06:08.427 [2024-12-16 09:53:06.890611] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.685 00:06:08.685 real 0m3.894s 00:06:08.685 user 0m5.689s 00:06:08.685 sys 0m0.339s 00:06:08.685 09:53:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.685 09:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.685 ************************************ 00:06:08.685 END TEST event_scheduler 00:06:08.685 ************************************ 00:06:08.685 09:53:07 -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.685 09:53:07 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.685 09:53:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.685 09:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.685 09:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.685 ************************************ 00:06:08.685 START TEST app_repeat 00:06:08.685 ************************************ 00:06:08.685 09:53:07 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:08.685 09:53:07 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.685 09:53:07 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.685 09:53:07 -- event/event.sh@13 -- # local nbd_list 00:06:08.685 09:53:07 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.685 09:53:07 -- event/event.sh@14 -- # local bdev_list 00:06:08.685 09:53:07 -- event/event.sh@15 -- # local repeat_times=4 00:06:08.685 09:53:07 -- event/event.sh@17 -- # modprobe nbd 00:06:08.685 09:53:07 -- event/event.sh@19 -- # repeat_pid=68908 00:06:08.685 09:53:07 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.685 09:53:07 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.685 Process app_repeat pid: 68908 00:06:08.685 09:53:07 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68908' 00:06:08.685 09:53:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.685 spdk_app_start Round 0 00:06:08.685 09:53:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.685 09:53:07 -- event/event.sh@25 -- # waitforlisten 68908 /var/tmp/spdk-nbd.sock 00:06:08.685 09:53:07 -- common/autotest_common.sh@829 -- # '[' -z 68908 ']' 00:06:08.685 09:53:07 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.685 09:53:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.685 09:53:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.685 09:53:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.685 09:53:07 -- common/autotest_common.sh@10 -- # set +x 00:06:08.685 [2024-12-16 09:53:07.176003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.685 [2024-12-16 09:53:07.176113] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68908 ] 00:06:08.957 [2024-12-16 09:53:07.309302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.957 [2024-12-16 09:53:07.374440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.957 [2024-12-16 09:53:07.374452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.535 09:53:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.535 09:53:08 -- common/autotest_common.sh@862 -- # return 0 00:06:09.535 09:53:08 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.794 Malloc0 00:06:09.794 09:53:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.052 Malloc1 00:06:10.052 09:53:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.052 09:53:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.311 /dev/nbd0 00:06:10.311 09:53:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.311 09:53:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.311 09:53:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.311 09:53:08 -- common/autotest_common.sh@867 -- # local i 00:06:10.311 09:53:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.311 09:53:08 -- common/autotest_common.sh@869 
-- # (( i <= 20 )) 00:06:10.311 09:53:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.311 09:53:08 -- common/autotest_common.sh@871 -- # break 00:06:10.311 09:53:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.311 09:53:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.311 09:53:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.311 1+0 records in 00:06:10.311 1+0 records out 00:06:10.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312198 s, 13.1 MB/s 00:06:10.311 09:53:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.311 09:53:08 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.311 09:53:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.311 09:53:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.311 09:53:08 -- common/autotest_common.sh@887 -- # return 0 00:06:10.311 09:53:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.311 09:53:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.311 09:53:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.878 /dev/nbd1 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.878 09:53:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.878 09:53:09 -- common/autotest_common.sh@867 -- # local i 00:06:10.878 09:53:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.878 09:53:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.878 09:53:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.878 09:53:09 -- common/autotest_common.sh@871 -- # break 00:06:10.878 09:53:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.878 09:53:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.878 09:53:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.878 1+0 records in 00:06:10.878 1+0 records out 00:06:10.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306375 s, 13.4 MB/s 00:06:10.878 09:53:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.878 09:53:09 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.878 09:53:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.878 09:53:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.878 09:53:09 -- common/autotest_common.sh@887 -- # return 0 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.878 09:53:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.137 { 00:06:11.137 "bdev_name": "Malloc0", 00:06:11.137 "nbd_device": "/dev/nbd0" 00:06:11.137 }, 00:06:11.137 { 00:06:11.137 "bdev_name": "Malloc1", 00:06:11.137 "nbd_device": 
"/dev/nbd1" 00:06:11.137 } 00:06:11.137 ]' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.137 { 00:06:11.137 "bdev_name": "Malloc0", 00:06:11.137 "nbd_device": "/dev/nbd0" 00:06:11.137 }, 00:06:11.137 { 00:06:11.137 "bdev_name": "Malloc1", 00:06:11.137 "nbd_device": "/dev/nbd1" 00:06:11.137 } 00:06:11.137 ]' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.137 /dev/nbd1' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.137 /dev/nbd1' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.137 256+0 records in 00:06:11.137 256+0 records out 00:06:11.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00794598 s, 132 MB/s 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.137 256+0 records in 00:06:11.137 256+0 records out 00:06:11.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231398 s, 45.3 MB/s 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.137 256+0 records in 00:06:11.137 256+0 records out 00:06:11.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028241 s, 37.1 MB/s 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.137 09:53:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@41 -- # break 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.396 09:53:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@41 -- # break 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@65 -- # true 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.963 09:53:10 -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.964 09:53:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.222 09:53:10 -- event/event.sh@35 -- # sleep 3 00:06:12.481 [2024-12-16 09:53:10.978747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.481 [2024-12-16 09:53:11.023137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.481 [2024-12-16 
09:53:11.023148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.481 [2024-12-16 09:53:11.081094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.481 [2024-12-16 09:53:11.081177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.765 spdk_app_start Round 1 00:06:15.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.765 09:53:13 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.765 09:53:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.765 09:53:13 -- event/event.sh@25 -- # waitforlisten 68908 /var/tmp/spdk-nbd.sock 00:06:15.765 09:53:13 -- common/autotest_common.sh@829 -- # '[' -z 68908 ']' 00:06:15.765 09:53:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.765 09:53:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.765 09:53:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.765 09:53:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.765 09:53:13 -- common/autotest_common.sh@10 -- # set +x 00:06:15.765 09:53:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.765 09:53:14 -- common/autotest_common.sh@862 -- # return 0 00:06:15.765 09:53:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.765 Malloc0 00:06:15.765 09:53:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.024 Malloc1 00:06:16.024 09:53:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.024 09:53:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.283 /dev/nbd0 00:06:16.283 09:53:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.283 09:53:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.283 09:53:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:16.283 09:53:14 -- common/autotest_common.sh@867 -- # local i 00:06:16.283 09:53:14 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:16.283 09:53:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.283 09:53:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:16.283 09:53:14 -- common/autotest_common.sh@871 -- # break 00:06:16.283 09:53:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.283 09:53:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.283 09:53:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.283 1+0 records in 00:06:16.283 1+0 records out 00:06:16.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020164 s, 20.3 MB/s 00:06:16.283 09:53:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.283 09:53:14 -- common/autotest_common.sh@884 -- # size=4096 00:06:16.283 09:53:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.283 09:53:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.283 09:53:14 -- common/autotest_common.sh@887 -- # return 0 00:06:16.283 09:53:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.283 09:53:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.283 09:53:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.542 /dev/nbd1 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.542 09:53:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:16.542 09:53:15 -- common/autotest_common.sh@867 -- # local i 00:06:16.542 09:53:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.542 09:53:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.542 09:53:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:16.542 09:53:15 -- common/autotest_common.sh@871 -- # break 00:06:16.542 09:53:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.542 09:53:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.542 09:53:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.542 1+0 records in 00:06:16.542 1+0 records out 00:06:16.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310131 s, 13.2 MB/s 00:06:16.542 09:53:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.542 09:53:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:16.542 09:53:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.542 09:53:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.542 09:53:15 -- common/autotest_common.sh@887 -- # return 0 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.542 09:53:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.108 { 00:06:17.108 "bdev_name": "Malloc0", 00:06:17.108 "nbd_device": "/dev/nbd0" 00:06:17.108 }, 00:06:17.108 { 00:06:17.108 
"bdev_name": "Malloc1", 00:06:17.108 "nbd_device": "/dev/nbd1" 00:06:17.108 } 00:06:17.108 ]' 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.108 { 00:06:17.108 "bdev_name": "Malloc0", 00:06:17.108 "nbd_device": "/dev/nbd0" 00:06:17.108 }, 00:06:17.108 { 00:06:17.108 "bdev_name": "Malloc1", 00:06:17.108 "nbd_device": "/dev/nbd1" 00:06:17.108 } 00:06:17.108 ]' 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.108 /dev/nbd1' 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.108 /dev/nbd1' 00:06:17.108 09:53:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.109 256+0 records in 00:06:17.109 256+0 records out 00:06:17.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767657 s, 137 MB/s 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.109 256+0 records in 00:06:17.109 256+0 records out 00:06:17.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230512 s, 45.5 MB/s 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.109 256+0 records in 00:06:17.109 256+0 records out 00:06:17.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267332 s, 39.2 MB/s 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.109 09:53:15 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.109 09:53:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@41 -- # break 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.368 09:53:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@41 -- # break 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.626 09:53:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@65 -- # true 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.885 09:53:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.885 09:53:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.143 09:53:16 -- event/event.sh@35 -- # sleep 3 00:06:18.401 [2024-12-16 09:53:16.801017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.401 [2024-12-16 09:53:16.845407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
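Each app_repeat round maps Malloc0 and Malloc1 onto /dev/nbd0 and /dev/nbd1 and round-trips 1 MiB of random data through them; the heart of that verification, stripped of the helper plumbing, is roughly this (temp-file name is a placeholder; block size, count and the cmp invocation match the dd/cmp lines above):

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
    done
    rm -f "$tmp"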
00:06:18.401 [2024-12-16 09:53:16.845417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.401 [2024-12-16 09:53:16.898979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.401 [2024-12-16 09:53:16.899062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.716 spdk_app_start Round 2 00:06:21.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.716 09:53:19 -- event/event.sh@23 -- # for i in {0..2} 00:06:21.716 09:53:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:21.716 09:53:19 -- event/event.sh@25 -- # waitforlisten 68908 /var/tmp/spdk-nbd.sock 00:06:21.716 09:53:19 -- common/autotest_common.sh@829 -- # '[' -z 68908 ']' 00:06:21.716 09:53:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.716 09:53:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.716 09:53:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.716 09:53:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.716 09:53:19 -- common/autotest_common.sh@10 -- # set +x 00:06:21.716 09:53:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.716 09:53:19 -- common/autotest_common.sh@862 -- # return 0 00:06:21.716 09:53:19 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.716 Malloc0 00:06:21.716 09:53:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.974 Malloc1 00:06:21.974 09:53:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.974 09:53:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.233 /dev/nbd0 00:06:22.233 09:53:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.233 09:53:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.233 09:53:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:22.233 09:53:20 -- common/autotest_common.sh@867 -- # local i 00:06:22.233 09:53:20 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.233 09:53:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.233 09:53:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:22.233 09:53:20 -- common/autotest_common.sh@871 -- # break 00:06:22.233 09:53:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.233 09:53:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.233 09:53:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.233 1+0 records in 00:06:22.233 1+0 records out 00:06:22.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325807 s, 12.6 MB/s 00:06:22.233 09:53:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.233 09:53:20 -- common/autotest_common.sh@884 -- # size=4096 00:06:22.233 09:53:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.233 09:53:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.233 09:53:20 -- common/autotest_common.sh@887 -- # return 0 00:06:22.233 09:53:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.233 09:53:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.233 09:53:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.492 /dev/nbd1 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.750 09:53:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:22.750 09:53:21 -- common/autotest_common.sh@867 -- # local i 00:06:22.750 09:53:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.750 09:53:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.750 09:53:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:22.750 09:53:21 -- common/autotest_common.sh@871 -- # break 00:06:22.750 09:53:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.750 09:53:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.750 09:53:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.750 1+0 records in 00:06:22.750 1+0 records out 00:06:22.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289075 s, 14.2 MB/s 00:06:22.750 09:53:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.750 09:53:21 -- common/autotest_common.sh@884 -- # size=4096 00:06:22.750 09:53:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.750 09:53:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.750 09:53:21 -- common/autotest_common.sh@887 -- # return 0 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.750 09:53:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.008 { 00:06:23.008 "bdev_name": "Malloc0", 00:06:23.008 "nbd_device": "/dev/nbd0" 
00:06:23.008 }, 00:06:23.008 { 00:06:23.008 "bdev_name": "Malloc1", 00:06:23.008 "nbd_device": "/dev/nbd1" 00:06:23.008 } 00:06:23.008 ]' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.008 { 00:06:23.008 "bdev_name": "Malloc0", 00:06:23.008 "nbd_device": "/dev/nbd0" 00:06:23.008 }, 00:06:23.008 { 00:06:23.008 "bdev_name": "Malloc1", 00:06:23.008 "nbd_device": "/dev/nbd1" 00:06:23.008 } 00:06:23.008 ]' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.008 /dev/nbd1' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.008 /dev/nbd1' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.008 09:53:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.009 256+0 records in 00:06:23.009 256+0 records out 00:06:23.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108248 s, 96.9 MB/s 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.009 256+0 records in 00:06:23.009 256+0 records out 00:06:23.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246661 s, 42.5 MB/s 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.009 256+0 records in 00:06:23.009 256+0 records out 00:06:23.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262984 s, 39.9 MB/s 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@51 -- # local i 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.009 09:53:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@41 -- # break 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.267 09:53:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@41 -- # break 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.525 09:53:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.783 09:53:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.783 09:53:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.783 09:53:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.783 09:53:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.783 09:53:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@65 -- # true 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.784 09:53:22 -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.784 09:53:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.351 09:53:22 -- event/event.sh@35 -- # sleep 3 00:06:24.351 [2024-12-16 09:53:22.845241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.351 [2024-12-16 09:53:22.887817] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:24.351 [2024-12-16 09:53:22.887827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.351 [2024-12-16 09:53:22.941666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.351 [2024-12-16 09:53:22.941748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.637 09:53:25 -- event/event.sh@38 -- # waitforlisten 68908 /var/tmp/spdk-nbd.sock 00:06:27.637 09:53:25 -- common/autotest_common.sh@829 -- # '[' -z 68908 ']' 00:06:27.637 09:53:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.637 09:53:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.637 09:53:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:27.637 09:53:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.637 09:53:25 -- common/autotest_common.sh@10 -- # set +x 00:06:27.637 09:53:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.637 09:53:25 -- common/autotest_common.sh@862 -- # return 0 00:06:27.637 09:53:25 -- event/event.sh@39 -- # killprocess 68908 00:06:27.637 09:53:25 -- common/autotest_common.sh@936 -- # '[' -z 68908 ']' 00:06:27.637 09:53:25 -- common/autotest_common.sh@940 -- # kill -0 68908 00:06:27.637 09:53:25 -- common/autotest_common.sh@941 -- # uname 00:06:27.637 09:53:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.637 09:53:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68908 00:06:27.637 killing process with pid 68908 00:06:27.637 09:53:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.637 09:53:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.637 09:53:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68908' 00:06:27.637 09:53:25 -- common/autotest_common.sh@955 -- # kill 68908 00:06:27.637 09:53:25 -- common/autotest_common.sh@960 -- # wait 68908 00:06:27.637 spdk_app_start is called in Round 0. 00:06:27.637 Shutdown signal received, stop current app iteration 00:06:27.637 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:27.637 spdk_app_start is called in Round 1. 00:06:27.637 Shutdown signal received, stop current app iteration 00:06:27.637 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:27.637 spdk_app_start is called in Round 2. 00:06:27.637 Shutdown signal received, stop current app iteration 00:06:27.637 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:27.637 spdk_app_start is called in Round 3. 
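The nbd_dd_data_verify steps traced earlier (@70-@85) fill a temporary file with 1 MiB of random data, copy it onto each exported nbd device with dd, and then compare the first 1 MiB of every device back against that file with cmp. A condensed sketch of that write/verify pass, with the path taken from the trace and the two phases merged for readability (the real helper takes a write or verify operation argument and is called twice), is:

    nbd_dd_data_verify() {
        local nbd_list=("$@")    # e.g. /dev/nbd0 /dev/nbd1
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
        local dev

        # write pass: 256 x 4 KiB of random data, pushed onto every device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done

        # verify pass: the first 1 MiB of each device must match the file
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    }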
00:06:27.637 Shutdown signal received, stop current app iteration 00:06:27.637 09:53:26 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.637 09:53:26 -- event/event.sh@42 -- # return 0 00:06:27.637 00:06:27.637 real 0m19.010s 00:06:27.637 user 0m42.930s 00:06:27.637 sys 0m2.928s 00:06:27.637 09:53:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.637 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.637 ************************************ 00:06:27.637 END TEST app_repeat 00:06:27.637 ************************************ 00:06:27.637 09:53:26 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.637 09:53:26 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.637 09:53:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.637 09:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.637 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.637 ************************************ 00:06:27.637 START TEST cpu_locks 00:06:27.637 ************************************ 00:06:27.637 09:53:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.896 * Looking for test storage... 00:06:27.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.896 09:53:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.896 09:53:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.896 09:53:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:27.896 09:53:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:27.896 09:53:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:27.896 09:53:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:27.896 09:53:26 -- scripts/common.sh@335 -- # IFS=.-: 00:06:27.896 09:53:26 -- scripts/common.sh@335 -- # read -ra ver1 00:06:27.896 09:53:26 -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.896 09:53:26 -- scripts/common.sh@336 -- # read -ra ver2 00:06:27.896 09:53:26 -- scripts/common.sh@337 -- # local 'op=<' 00:06:27.896 09:53:26 -- scripts/common.sh@339 -- # ver1_l=2 00:06:27.896 09:53:26 -- scripts/common.sh@340 -- # ver2_l=1 00:06:27.896 09:53:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:27.896 09:53:26 -- scripts/common.sh@343 -- # case "$op" in 00:06:27.896 09:53:26 -- scripts/common.sh@344 -- # : 1 00:06:27.896 09:53:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:27.896 09:53:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.896 09:53:26 -- scripts/common.sh@364 -- # decimal 1 00:06:27.896 09:53:26 -- scripts/common.sh@352 -- # local d=1 00:06:27.896 09:53:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.896 09:53:26 -- scripts/common.sh@354 -- # echo 1 00:06:27.896 09:53:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:27.896 09:53:26 -- scripts/common.sh@365 -- # decimal 2 00:06:27.896 09:53:26 -- scripts/common.sh@352 -- # local d=2 00:06:27.896 09:53:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.896 09:53:26 -- scripts/common.sh@354 -- # echo 2 00:06:27.896 09:53:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:27.896 09:53:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:27.896 09:53:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:27.896 09:53:26 -- scripts/common.sh@367 -- # return 0 00:06:27.896 09:53:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:27.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.896 --rc genhtml_branch_coverage=1 00:06:27.896 --rc genhtml_function_coverage=1 00:06:27.896 --rc genhtml_legend=1 00:06:27.896 --rc geninfo_all_blocks=1 00:06:27.896 --rc geninfo_unexecuted_blocks=1 00:06:27.896 00:06:27.896 ' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:27.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.896 --rc genhtml_branch_coverage=1 00:06:27.896 --rc genhtml_function_coverage=1 00:06:27.896 --rc genhtml_legend=1 00:06:27.896 --rc geninfo_all_blocks=1 00:06:27.896 --rc geninfo_unexecuted_blocks=1 00:06:27.896 00:06:27.896 ' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:27.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.896 --rc genhtml_branch_coverage=1 00:06:27.896 --rc genhtml_function_coverage=1 00:06:27.896 --rc genhtml_legend=1 00:06:27.896 --rc geninfo_all_blocks=1 00:06:27.896 --rc geninfo_unexecuted_blocks=1 00:06:27.896 00:06:27.896 ' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:27.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.896 --rc genhtml_branch_coverage=1 00:06:27.896 --rc genhtml_function_coverage=1 00:06:27.896 --rc genhtml_legend=1 00:06:27.896 --rc geninfo_all_blocks=1 00:06:27.896 --rc geninfo_unexecuted_blocks=1 00:06:27.896 00:06:27.896 ' 00:06:27.896 09:53:26 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.896 09:53:26 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.896 09:53:26 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.896 09:53:26 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.896 09:53:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.896 09:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.896 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.896 ************************************ 00:06:27.896 START TEST default_locks 00:06:27.896 ************************************ 00:06:27.896 09:53:26 -- common/autotest_common.sh@1114 -- # default_locks 00:06:27.896 09:53:26 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69540 00:06:27.896 09:53:26 -- event/cpu_locks.sh@47 -- # waitforlisten 69540 00:06:27.896 09:53:26 -- common/autotest_common.sh@829 -- # '[' -z 69540 ']' 00:06:27.896 09:53:26 
-- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.896 09:53:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.896 09:53:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.896 09:53:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.896 09:53:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.896 09:53:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.896 [2024-12-16 09:53:26.438142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.896 [2024-12-16 09:53:26.438243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69540 ] 00:06:28.155 [2024-12-16 09:53:26.568217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.155 [2024-12-16 09:53:26.623967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.155 [2024-12-16 09:53:26.624157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.090 09:53:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.090 09:53:27 -- common/autotest_common.sh@862 -- # return 0 00:06:29.090 09:53:27 -- event/cpu_locks.sh@49 -- # locks_exist 69540 00:06:29.090 09:53:27 -- event/cpu_locks.sh@22 -- # lslocks -p 69540 00:06:29.090 09:53:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.348 09:53:27 -- event/cpu_locks.sh@50 -- # killprocess 69540 00:06:29.348 09:53:27 -- common/autotest_common.sh@936 -- # '[' -z 69540 ']' 00:06:29.348 09:53:27 -- common/autotest_common.sh@940 -- # kill -0 69540 00:06:29.348 09:53:27 -- common/autotest_common.sh@941 -- # uname 00:06:29.348 09:53:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.348 09:53:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69540 00:06:29.348 09:53:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.348 09:53:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.348 killing process with pid 69540 00:06:29.348 09:53:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69540' 00:06:29.348 09:53:27 -- common/autotest_common.sh@955 -- # kill 69540 00:06:29.348 09:53:27 -- common/autotest_common.sh@960 -- # wait 69540 00:06:29.607 09:53:28 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69540 00:06:29.607 09:53:28 -- common/autotest_common.sh@650 -- # local es=0 00:06:29.607 09:53:28 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69540 00:06:29.607 09:53:28 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:29.607 09:53:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.607 09:53:28 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:29.607 09:53:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.607 09:53:28 -- common/autotest_common.sh@653 -- # waitforlisten 69540 00:06:29.607 09:53:28 -- common/autotest_common.sh@829 -- # '[' -z 69540 ']' 00:06:29.607 09:53:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.607 09:53:28 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.607 09:53:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.607 09:53:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.607 09:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69540) - No such process 00:06:29.607 ERROR: process (pid: 69540) is no longer running 00:06:29.607 09:53:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.607 09:53:28 -- common/autotest_common.sh@862 -- # return 1 00:06:29.607 09:53:28 -- common/autotest_common.sh@653 -- # es=1 00:06:29.607 09:53:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.607 09:53:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.607 09:53:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.607 09:53:28 -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.607 09:53:28 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.607 09:53:28 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.607 09:53:28 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.607 00:06:29.607 real 0m1.843s 00:06:29.607 user 0m2.011s 00:06:29.607 sys 0m0.565s 00:06:29.607 09:53:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.607 ************************************ 00:06:29.607 09:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.607 END TEST default_locks 00:06:29.607 ************************************ 00:06:29.866 09:53:28 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.866 09:53:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.866 09:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.866 09:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.866 ************************************ 00:06:29.866 START TEST default_locks_via_rpc 00:06:29.866 ************************************ 00:06:29.866 09:53:28 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:29.866 09:53:28 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69604 00:06:29.866 09:53:28 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.866 09:53:28 -- event/cpu_locks.sh@63 -- # waitforlisten 69604 00:06:29.866 09:53:28 -- common/autotest_common.sh@829 -- # '[' -z 69604 ']' 00:06:29.866 09:53:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.866 09:53:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.866 09:53:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.866 09:53:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.866 09:53:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.866 [2024-12-16 09:53:28.328652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:29.866 [2024-12-16 09:53:28.329253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69604 ] 00:06:29.866 [2024-12-16 09:53:28.463153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.124 [2024-12-16 09:53:28.517523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.124 [2024-12-16 09:53:28.518141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.691 09:53:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.691 09:53:29 -- common/autotest_common.sh@862 -- # return 0 00:06:30.691 09:53:29 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.691 09:53:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.691 09:53:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.963 09:53:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.963 09:53:29 -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.963 09:53:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.963 09:53:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.963 09:53:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.963 09:53:29 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.963 09:53:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.963 09:53:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.963 09:53:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.963 09:53:29 -- event/cpu_locks.sh@71 -- # locks_exist 69604 00:06:30.963 09:53:29 -- event/cpu_locks.sh@22 -- # lslocks -p 69604 00:06:30.963 09:53:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.234 09:53:29 -- event/cpu_locks.sh@73 -- # killprocess 69604 00:06:31.234 09:53:29 -- common/autotest_common.sh@936 -- # '[' -z 69604 ']' 00:06:31.234 09:53:29 -- common/autotest_common.sh@940 -- # kill -0 69604 00:06:31.234 09:53:29 -- common/autotest_common.sh@941 -- # uname 00:06:31.234 09:53:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.234 09:53:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69604 00:06:31.234 09:53:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.234 09:53:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.234 09:53:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69604' 00:06:31.234 killing process with pid 69604 00:06:31.234 09:53:29 -- common/autotest_common.sh@955 -- # kill 69604 00:06:31.234 09:53:29 -- common/autotest_common.sh@960 -- # wait 69604 00:06:31.849 00:06:31.849 real 0m1.861s 00:06:31.849 user 0m2.057s 00:06:31.849 sys 0m0.527s 00:06:31.849 09:53:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.849 09:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.849 ************************************ 00:06:31.849 END TEST default_locks_via_rpc 00:06:31.850 ************************************ 00:06:31.850 09:53:30 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.850 09:53:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.850 09:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.850 09:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.850 
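The locks_exist checks traced in the default_locks tests above decide whether the target still holds its per-core CPU lock by listing the file locks owned by its pid and looking for the spdk_cpu_lock prefix. A minimal sketch matching the @22 trace lines is:

    # Succeed if the process with the given pid holds an SPDK CPU core lock
    # (the lock files live under /var/tmp/spdk_cpu_lock_NNN, one per claimed core).
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

For example, locks_exist 69604 succeeds only after framework_enable_cpumask_locks has been issued, which is exactly what the default_locks_via_rpc steps above verify.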
************************************ 00:06:31.850 START TEST non_locking_app_on_locked_coremask 00:06:31.850 ************************************ 00:06:31.850 09:53:30 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:31.850 09:53:30 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69667 00:06:31.850 09:53:30 -- event/cpu_locks.sh@81 -- # waitforlisten 69667 /var/tmp/spdk.sock 00:06:31.850 09:53:30 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.850 09:53:30 -- common/autotest_common.sh@829 -- # '[' -z 69667 ']' 00:06:31.850 09:53:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.850 09:53:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.850 09:53:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.850 09:53:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.850 09:53:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.850 [2024-12-16 09:53:30.246599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.850 [2024-12-16 09:53:30.246692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69667 ] 00:06:31.850 [2024-12-16 09:53:30.385063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.850 [2024-12-16 09:53:30.445058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.850 [2024-12-16 09:53:30.445221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.785 09:53:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.785 09:53:31 -- common/autotest_common.sh@862 -- # return 0 00:06:32.785 09:53:31 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69695 00:06:32.785 09:53:31 -- event/cpu_locks.sh@85 -- # waitforlisten 69695 /var/tmp/spdk2.sock 00:06:32.785 09:53:31 -- common/autotest_common.sh@829 -- # '[' -z 69695 ']' 00:06:32.785 09:53:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.785 09:53:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.785 09:53:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.785 09:53:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.785 09:53:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.785 09:53:31 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.785 [2024-12-16 09:53:31.292900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.785 [2024-12-16 09:53:31.293806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69695 ] 00:06:33.044 [2024-12-16 09:53:31.435296] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.044 [2024-12-16 09:53:31.435339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.044 [2024-12-16 09:53:31.562832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.044 [2024-12-16 09:53:31.562967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.611 09:53:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.611 09:53:32 -- common/autotest_common.sh@862 -- # return 0 00:06:33.611 09:53:32 -- event/cpu_locks.sh@87 -- # locks_exist 69667 00:06:33.611 09:53:32 -- event/cpu_locks.sh@22 -- # lslocks -p 69667 00:06:33.611 09:53:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.178 09:53:32 -- event/cpu_locks.sh@89 -- # killprocess 69667 00:06:34.178 09:53:32 -- common/autotest_common.sh@936 -- # '[' -z 69667 ']' 00:06:34.178 09:53:32 -- common/autotest_common.sh@940 -- # kill -0 69667 00:06:34.178 09:53:32 -- common/autotest_common.sh@941 -- # uname 00:06:34.178 09:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.178 09:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69667 00:06:34.178 09:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.178 killing process with pid 69667 00:06:34.178 09:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.179 09:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69667' 00:06:34.179 09:53:32 -- common/autotest_common.sh@955 -- # kill 69667 00:06:34.179 09:53:32 -- common/autotest_common.sh@960 -- # wait 69667 00:06:35.115 09:53:33 -- event/cpu_locks.sh@90 -- # killprocess 69695 00:06:35.115 09:53:33 -- common/autotest_common.sh@936 -- # '[' -z 69695 ']' 00:06:35.115 09:53:33 -- common/autotest_common.sh@940 -- # kill -0 69695 00:06:35.115 09:53:33 -- common/autotest_common.sh@941 -- # uname 00:06:35.115 09:53:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.115 09:53:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69695 00:06:35.115 09:53:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.115 09:53:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.115 killing process with pid 69695 00:06:35.115 09:53:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69695' 00:06:35.115 09:53:33 -- common/autotest_common.sh@955 -- # kill 69695 00:06:35.115 09:53:33 -- common/autotest_common.sh@960 -- # wait 69695 00:06:35.373 00:06:35.373 real 0m3.614s 00:06:35.373 user 0m4.018s 00:06:35.373 sys 0m0.961s 00:06:35.373 09:53:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.373 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.373 ************************************ 00:06:35.373 END TEST non_locking_app_on_locked_coremask 00:06:35.373 ************************************ 00:06:35.373 09:53:33 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.373 09:53:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.373 09:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.373 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.373 ************************************ 00:06:35.373 START TEST locking_app_on_unlocked_coremask 00:06:35.373 ************************************ 00:06:35.373 09:53:33 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:35.373 09:53:33 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69769 00:06:35.373 09:53:33 -- event/cpu_locks.sh@99 -- # waitforlisten 69769 /var/tmp/spdk.sock 00:06:35.373 09:53:33 -- common/autotest_common.sh@829 -- # '[' -z 69769 ']' 00:06:35.373 09:53:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.373 09:53:33 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.373 09:53:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.373 09:53:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.373 09:53:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.373 09:53:33 -- common/autotest_common.sh@10 -- # set +x 00:06:35.373 [2024-12-16 09:53:33.911438] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.373 [2024-12-16 09:53:33.911534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69769 ] 00:06:35.632 [2024-12-16 09:53:34.049325] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.632 [2024-12-16 09:53:34.049383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.632 [2024-12-16 09:53:34.107314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.632 [2024-12-16 09:53:34.107524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.568 09:53:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.568 09:53:34 -- common/autotest_common.sh@862 -- # return 0 00:06:36.568 09:53:34 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69797 00:06:36.568 09:53:34 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.568 09:53:34 -- event/cpu_locks.sh@103 -- # waitforlisten 69797 /var/tmp/spdk2.sock 00:06:36.568 09:53:34 -- common/autotest_common.sh@829 -- # '[' -z 69797 ']' 00:06:36.568 09:53:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.568 09:53:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.568 09:53:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.568 09:53:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.568 09:53:34 -- common/autotest_common.sh@10 -- # set +x 00:06:36.568 [2024-12-16 09:53:34.899507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.568 [2024-12-16 09:53:34.899610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69797 ] 00:06:36.568 [2024-12-16 09:53:35.040237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.568 [2024-12-16 09:53:35.153766] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.568 [2024-12-16 09:53:35.153919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.503 09:53:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.503 09:53:35 -- common/autotest_common.sh@862 -- # return 0 00:06:37.503 09:53:35 -- event/cpu_locks.sh@105 -- # locks_exist 69797 00:06:37.503 09:53:35 -- event/cpu_locks.sh@22 -- # lslocks -p 69797 00:06:37.503 09:53:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.070 09:53:36 -- event/cpu_locks.sh@107 -- # killprocess 69769 00:06:38.070 09:53:36 -- common/autotest_common.sh@936 -- # '[' -z 69769 ']' 00:06:38.070 09:53:36 -- common/autotest_common.sh@940 -- # kill -0 69769 00:06:38.070 09:53:36 -- common/autotest_common.sh@941 -- # uname 00:06:38.070 09:53:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.070 09:53:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69769 00:06:38.070 09:53:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.070 09:53:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.070 killing process with pid 69769 00:06:38.070 09:53:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69769' 00:06:38.070 09:53:36 -- common/autotest_common.sh@955 -- # kill 69769 00:06:38.070 09:53:36 -- common/autotest_common.sh@960 -- # wait 69769 00:06:39.006 09:53:37 -- event/cpu_locks.sh@108 -- # killprocess 69797 00:06:39.006 09:53:37 -- common/autotest_common.sh@936 -- # '[' -z 69797 ']' 00:06:39.006 09:53:37 -- common/autotest_common.sh@940 -- # kill -0 69797 00:06:39.006 09:53:37 -- common/autotest_common.sh@941 -- # uname 00:06:39.006 09:53:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.006 09:53:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69797 00:06:39.006 09:53:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.006 09:53:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.006 killing process with pid 69797 00:06:39.006 09:53:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69797' 00:06:39.006 09:53:37 -- common/autotest_common.sh@955 -- # kill 69797 00:06:39.006 09:53:37 -- common/autotest_common.sh@960 -- # wait 69797 00:06:39.265 00:06:39.265 real 0m3.894s 00:06:39.265 user 0m4.362s 00:06:39.265 sys 0m1.082s 00:06:39.265 09:53:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.265 09:53:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.265 ************************************ 00:06:39.265 END TEST locking_app_on_unlocked_coremask 00:06:39.265 ************************************ 00:06:39.265 09:53:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.265 09:53:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.265 09:53:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.265 09:53:37 -- common/autotest_common.sh@10 -- # set +x 
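Every test above tears its targets down through the same killprocess helper, whose trace lines (common/autotest_common.sh @936-@960) repeat for each pid. A simplified sketch of those steps, reconstructed from the trace (the sudo branch is not exercised in this log and is only stubbed), is:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # @936: a pid is required
        kill -0 "$pid"                            # @940: bail out if it is already gone
        if [ "$(uname)" = Linux ]; then           # @941
            # @942: SPDK reactors report their comm as reactor_0, reactor_1, ...
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                # the real helper resolves the underlying child here; omitted in this sketch
                return 1
            fi
        fi
        echo "killing process with pid $pid"      # @954
        kill "$pid"                               # @955: SIGTERM, the app shuts down cleanly
        wait "$pid"                               # @960: reap it so sockets and core locks are released
    }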
00:06:39.265 ************************************ 00:06:39.265 START TEST locking_app_on_locked_coremask 00:06:39.265 ************************************ 00:06:39.265 09:53:37 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:39.265 09:53:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69876 00:06:39.265 09:53:37 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.265 09:53:37 -- event/cpu_locks.sh@116 -- # waitforlisten 69876 /var/tmp/spdk.sock 00:06:39.265 09:53:37 -- common/autotest_common.sh@829 -- # '[' -z 69876 ']' 00:06:39.265 09:53:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.266 09:53:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.266 09:53:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.266 09:53:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.266 09:53:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.266 [2024-12-16 09:53:37.863846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.266 [2024-12-16 09:53:37.863945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69876 ] 00:06:39.525 [2024-12-16 09:53:38.001468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.525 [2024-12-16 09:53:38.066113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.525 [2024-12-16 09:53:38.066250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.460 09:53:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.460 09:53:38 -- common/autotest_common.sh@862 -- # return 0 00:06:40.460 09:53:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69904 00:06:40.460 09:53:38 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.460 09:53:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69904 /var/tmp/spdk2.sock 00:06:40.460 09:53:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.460 09:53:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69904 /var/tmp/spdk2.sock 00:06:40.460 09:53:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.460 09:53:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.460 09:53:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.460 09:53:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.460 09:53:38 -- common/autotest_common.sh@653 -- # waitforlisten 69904 /var/tmp/spdk2.sock 00:06:40.460 09:53:38 -- common/autotest_common.sh@829 -- # '[' -z 69904 ']' 00:06:40.460 09:53:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.460 09:53:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.460 09:53:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
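The NOT waitforlisten 69904 step traced above starts a second spdk_tgt on the same core mask and asserts that bringing it up fails. NOT is a small inverter from autotest_common.sh: it runs its arguments, captures the exit status, and succeeds only when that status is non-zero. A simplified sketch of the idea (the real helper also validates the command and treats exit codes above 128 as signals, as the @650-@677 lines show) is:

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

Here the second instance cannot take the core-0 lock that pid 69876 already holds, waitforlisten returns failure for pid 69904, and NOT turns that failure into a passing step.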
00:06:40.460 09:53:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.460 09:53:38 -- common/autotest_common.sh@10 -- # set +x 00:06:40.460 [2024-12-16 09:53:38.864326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.460 [2024-12-16 09:53:38.864604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69904 ] 00:06:40.460 [2024-12-16 09:53:38.997315] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69876 has claimed it. 00:06:40.460 [2024-12-16 09:53:38.997410] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.028 ERROR: process (pid: 69904) is no longer running 00:06:41.028 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69904) - No such process 00:06:41.028 09:53:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.028 09:53:39 -- common/autotest_common.sh@862 -- # return 1 00:06:41.028 09:53:39 -- common/autotest_common.sh@653 -- # es=1 00:06:41.028 09:53:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.028 09:53:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.028 09:53:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.028 09:53:39 -- event/cpu_locks.sh@122 -- # locks_exist 69876 00:06:41.028 09:53:39 -- event/cpu_locks.sh@22 -- # lslocks -p 69876 00:06:41.028 09:53:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.606 09:53:39 -- event/cpu_locks.sh@124 -- # killprocess 69876 00:06:41.606 09:53:39 -- common/autotest_common.sh@936 -- # '[' -z 69876 ']' 00:06:41.606 09:53:39 -- common/autotest_common.sh@940 -- # kill -0 69876 00:06:41.606 09:53:39 -- common/autotest_common.sh@941 -- # uname 00:06:41.606 09:53:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.606 09:53:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69876 00:06:41.606 killing process with pid 69876 00:06:41.606 09:53:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.606 09:53:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.606 09:53:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69876' 00:06:41.606 09:53:39 -- common/autotest_common.sh@955 -- # kill 69876 00:06:41.606 09:53:39 -- common/autotest_common.sh@960 -- # wait 69876 00:06:41.879 00:06:41.879 real 0m2.516s 00:06:41.879 user 0m2.873s 00:06:41.879 sys 0m0.612s 00:06:41.879 09:53:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.879 ************************************ 00:06:41.879 END TEST locking_app_on_locked_coremask 00:06:41.879 ************************************ 00:06:41.879 09:53:40 -- common/autotest_common.sh@10 -- # set +x 00:06:41.879 09:53:40 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.879 09:53:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.879 09:53:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.879 09:53:40 -- common/autotest_common.sh@10 -- # set +x 00:06:41.879 ************************************ 00:06:41.879 START TEST locking_overlapped_coremask 00:06:41.879 ************************************ 00:06:41.879 09:53:40 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:41.879 09:53:40 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69955 00:06:41.879 09:53:40 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.879 09:53:40 -- event/cpu_locks.sh@133 -- # waitforlisten 69955 /var/tmp/spdk.sock 00:06:41.879 09:53:40 -- common/autotest_common.sh@829 -- # '[' -z 69955 ']' 00:06:41.879 09:53:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.879 09:53:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.879 09:53:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.879 09:53:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.879 09:53:40 -- common/autotest_common.sh@10 -- # set +x 00:06:41.879 [2024-12-16 09:53:40.419120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.879 [2024-12-16 09:53:40.419403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69955 ] 00:06:42.137 [2024-12-16 09:53:40.547922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.137 [2024-12-16 09:53:40.605929] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.137 [2024-12-16 09:53:40.606588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.137 [2024-12-16 09:53:40.606740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.137 [2024-12-16 09:53:40.606744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.073 09:53:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.073 09:53:41 -- common/autotest_common.sh@862 -- # return 0 00:06:43.073 09:53:41 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69985 00:06:43.073 09:53:41 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.073 09:53:41 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69985 /var/tmp/spdk2.sock 00:06:43.073 09:53:41 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.073 09:53:41 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69985 /var/tmp/spdk2.sock 00:06:43.073 09:53:41 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.073 09:53:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.073 09:53:41 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.073 09:53:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.073 09:53:41 -- common/autotest_common.sh@653 -- # waitforlisten 69985 /var/tmp/spdk2.sock 00:06:43.073 09:53:41 -- common/autotest_common.sh@829 -- # '[' -z 69985 ']' 00:06:43.073 09:53:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.073 09:53:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.073 09:53:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
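In locking_overlapped_coremask the first target is started with -m 0x7 and the second with -m 0x1c, so their reactor masks intersect. A quick way to see the collision (masks and pids as in the trace):

    # 0x7  -> binary 00111 -> cores 0,1,2  (first target, pid 69955)
    # 0x1c -> binary 11100 -> cores 2,3,4  (second target, pid 69985)
    printf 'shared core mask: 0x%x\n' $((0x7 & 0x1c))    # prints 0x4, i.e. core 2

The second instance therefore fails to create the lock file for core 2, which is the "Cannot create lock on core 2, probably process 69955 has claimed it" error that the NOT wrapper expects just below.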
00:06:43.073 09:53:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.073 09:53:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.073 [2024-12-16 09:53:41.470006] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.074 [2024-12-16 09:53:41.470089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:06:43.074 [2024-12-16 09:53:41.610552] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69955 has claimed it. 00:06:43.074 [2024-12-16 09:53:41.610616] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.641 ERROR: process (pid: 69985) is no longer running 00:06:43.641 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69985) - No such process 00:06:43.641 09:53:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.641 09:53:42 -- common/autotest_common.sh@862 -- # return 1 00:06:43.641 09:53:42 -- common/autotest_common.sh@653 -- # es=1 00:06:43.641 09:53:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.641 09:53:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.641 09:53:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.641 09:53:42 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.641 09:53:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.641 09:53:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.641 09:53:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.641 09:53:42 -- event/cpu_locks.sh@141 -- # killprocess 69955 00:06:43.641 09:53:42 -- common/autotest_common.sh@936 -- # '[' -z 69955 ']' 00:06:43.641 09:53:42 -- common/autotest_common.sh@940 -- # kill -0 69955 00:06:43.641 09:53:42 -- common/autotest_common.sh@941 -- # uname 00:06:43.641 09:53:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.641 09:53:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69955 00:06:43.641 09:53:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.641 09:53:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.641 09:53:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69955' 00:06:43.641 killing process with pid 69955 00:06:43.641 09:53:42 -- common/autotest_common.sh@955 -- # kill 69955 00:06:43.641 09:53:42 -- common/autotest_common.sh@960 -- # wait 69955 00:06:44.208 ************************************ 00:06:44.208 END TEST locking_overlapped_coremask 00:06:44.208 ************************************ 00:06:44.208 00:06:44.208 real 0m2.232s 00:06:44.208 user 0m6.415s 00:06:44.208 sys 0m0.456s 00:06:44.208 09:53:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.208 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.208 09:53:42 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:44.208 09:53:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.208 09:53:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.208 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.208 ************************************ 00:06:44.208 START TEST locking_overlapped_coremask_via_rpc 00:06:44.208 ************************************ 00:06:44.208 09:53:42 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:44.208 09:53:42 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70037 00:06:44.208 09:53:42 -- event/cpu_locks.sh@149 -- # waitforlisten 70037 /var/tmp/spdk.sock 00:06:44.208 09:53:42 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:44.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.209 09:53:42 -- common/autotest_common.sh@829 -- # '[' -z 70037 ']' 00:06:44.209 09:53:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.209 09:53:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.209 09:53:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.209 09:53:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.209 09:53:42 -- common/autotest_common.sh@10 -- # set +x 00:06:44.209 [2024-12-16 09:53:42.723032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.209 [2024-12-16 09:53:42.723154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70037 ] 00:06:44.468 [2024-12-16 09:53:42.860996] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.468 [2024-12-16 09:53:42.861032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.468 [2024-12-16 09:53:42.919584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.468 [2024-12-16 09:53:42.920244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.468 [2024-12-16 09:53:42.920410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.468 [2024-12-16 09:53:42.920411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.405 09:53:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.405 09:53:43 -- common/autotest_common.sh@862 -- # return 0 00:06:45.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.405 09:53:43 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70067 00:06:45.405 09:53:43 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:45.405 09:53:43 -- event/cpu_locks.sh@153 -- # waitforlisten 70067 /var/tmp/spdk2.sock 00:06:45.405 09:53:43 -- common/autotest_common.sh@829 -- # '[' -z 70067 ']' 00:06:45.405 09:53:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.405 09:53:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.405 09:53:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
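Both targets in this case are launched with --disable-cpumask-locks, and the 'CPU core locks deactivated.' notices above confirm that neither claims the per-core lock files at startup, so the overlapping masks are tolerated until locks are requested later over RPC. A rough sketch of the same setup, flags and paths as in the trace, ordering illustrative:

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                         # no lock files claimed yet
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # also starts despite sharing core 2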
00:06:45.405 09:53:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.405 09:53:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.405 [2024-12-16 09:53:43.763714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.405 [2024-12-16 09:53:43.764216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70067 ] 00:06:45.405 [2024-12-16 09:53:43.906853] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.405 [2024-12-16 09:53:43.906905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.664 [2024-12-16 09:53:44.033794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.664 [2024-12-16 09:53:44.034586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.664 [2024-12-16 09:53:44.035562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.664 [2024-12-16 09:53:44.035563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.231 09:53:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.231 09:53:44 -- common/autotest_common.sh@862 -- # return 0 00:06:46.231 09:53:44 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.231 09:53:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.231 09:53:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.231 09:53:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.231 09:53:44 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.231 09:53:44 -- common/autotest_common.sh@650 -- # local es=0 00:06:46.231 09:53:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.231 09:53:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:46.231 09:53:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.231 09:53:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:46.231 09:53:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.231 09:53:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.231 09:53:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.231 09:53:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.231 [2024-12-16 09:53:44.778542] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70037 has claimed it. 00:06:46.231 2024/12/16 09:53:44 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:46.231 request: 00:06:46.231 { 00:06:46.231 "method": "framework_enable_cpumask_locks", 00:06:46.231 "params": {} 00:06:46.231 } 00:06:46.231 Got JSON-RPC error response 00:06:46.231 GoRPCClient: error on JSON-RPC call 00:06:46.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
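The error above is the point of the test: framework_enable_cpumask_locks succeeds on the first target (mask 0x7, pid 70037), and the same RPC against /var/tmp/spdk2.sock then fails with Code=-32603 'Failed to claim CPU core: 2' because core 2 is already locked. A hedged sketch of issuing the same two calls by hand with the repo's scripts/rpc.py client (the client path is an assumption; the method name and socket are from the trace):

    scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims its cores, succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: fails, core 2 already claimed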
00:06:46.231 09:53:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:46.231 09:53:44 -- common/autotest_common.sh@653 -- # es=1 00:06:46.231 09:53:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.231 09:53:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.231 09:53:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.231 09:53:44 -- event/cpu_locks.sh@158 -- # waitforlisten 70037 /var/tmp/spdk.sock 00:06:46.231 09:53:44 -- common/autotest_common.sh@829 -- # '[' -z 70037 ']' 00:06:46.231 09:53:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.231 09:53:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.231 09:53:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.231 09:53:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.231 09:53:44 -- common/autotest_common.sh@10 -- # set +x 00:06:46.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.491 09:53:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.491 09:53:45 -- common/autotest_common.sh@862 -- # return 0 00:06:46.491 09:53:45 -- event/cpu_locks.sh@159 -- # waitforlisten 70067 /var/tmp/spdk2.sock 00:06:46.491 09:53:45 -- common/autotest_common.sh@829 -- # '[' -z 70067 ']' 00:06:46.491 09:53:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.491 09:53:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.491 09:53:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.491 09:53:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.491 09:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:46.749 ************************************ 00:06:46.749 END TEST locking_overlapped_coremask_via_rpc 00:06:46.749 ************************************ 00:06:46.749 09:53:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.749 09:53:45 -- common/autotest_common.sh@862 -- # return 0 00:06:46.749 09:53:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.749 09:53:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.749 09:53:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.749 09:53:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.749 00:06:46.749 real 0m2.677s 00:06:46.749 user 0m1.411s 00:06:46.749 sys 0m0.202s 00:06:46.749 09:53:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.749 09:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.009 09:53:45 -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.009 09:53:45 -- event/cpu_locks.sh@15 -- # [[ -z 70037 ]] 00:06:47.009 09:53:45 -- event/cpu_locks.sh@15 -- # killprocess 70037 00:06:47.009 09:53:45 -- common/autotest_common.sh@936 -- # '[' -z 70037 ']' 00:06:47.009 09:53:45 -- common/autotest_common.sh@940 -- # kill -0 70037 00:06:47.009 09:53:45 -- common/autotest_common.sh@941 -- # uname 00:06:47.009 09:53:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.009 09:53:45 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70037 00:06:47.009 killing process with pid 70037 00:06:47.009 09:53:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.009 09:53:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.009 09:53:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70037' 00:06:47.009 09:53:45 -- common/autotest_common.sh@955 -- # kill 70037 00:06:47.009 09:53:45 -- common/autotest_common.sh@960 -- # wait 70037 00:06:47.268 09:53:45 -- event/cpu_locks.sh@16 -- # [[ -z 70067 ]] 00:06:47.268 09:53:45 -- event/cpu_locks.sh@16 -- # killprocess 70067 00:06:47.268 09:53:45 -- common/autotest_common.sh@936 -- # '[' -z 70067 ']' 00:06:47.268 09:53:45 -- common/autotest_common.sh@940 -- # kill -0 70067 00:06:47.268 09:53:45 -- common/autotest_common.sh@941 -- # uname 00:06:47.268 09:53:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.268 09:53:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70067 00:06:47.268 killing process with pid 70067 00:06:47.268 09:53:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:47.268 09:53:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:47.268 09:53:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70067' 00:06:47.268 09:53:45 -- common/autotest_common.sh@955 -- # kill 70067 00:06:47.268 09:53:45 -- common/autotest_common.sh@960 -- # wait 70067 00:06:47.836 09:53:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.836 Process with pid 70037 is not found 00:06:47.836 Process with pid 70067 is not found 00:06:47.836 09:53:46 -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.836 09:53:46 -- event/cpu_locks.sh@15 -- # [[ -z 70037 ]] 00:06:47.836 09:53:46 -- event/cpu_locks.sh@15 -- # killprocess 70037 00:06:47.836 09:53:46 -- common/autotest_common.sh@936 -- # '[' -z 70037 ']' 00:06:47.836 09:53:46 -- common/autotest_common.sh@940 -- # kill -0 70037 00:06:47.836 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70037) - No such process 00:06:47.836 09:53:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70037 is not found' 00:06:47.836 09:53:46 -- event/cpu_locks.sh@16 -- # [[ -z 70067 ]] 00:06:47.836 09:53:46 -- event/cpu_locks.sh@16 -- # killprocess 70067 00:06:47.836 09:53:46 -- common/autotest_common.sh@936 -- # '[' -z 70067 ']' 00:06:47.836 09:53:46 -- common/autotest_common.sh@940 -- # kill -0 70067 00:06:47.836 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70067) - No such process 00:06:47.836 09:53:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70067 is not found' 00:06:47.836 09:53:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.836 ************************************ 00:06:47.836 END TEST cpu_locks 00:06:47.836 ************************************ 00:06:47.836 00:06:47.836 real 0m19.954s 00:06:47.836 user 0m35.931s 00:06:47.836 sys 0m5.285s 00:06:47.836 09:53:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.836 09:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.836 ************************************ 00:06:47.836 END TEST event 00:06:47.836 ************************************ 00:06:47.836 00:06:47.836 real 0m47.152s 00:06:47.836 user 1m31.027s 00:06:47.836 sys 0m9.002s 00:06:47.836 09:53:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.836 09:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.836 09:53:46 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.836 09:53:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.836 09:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.836 ************************************ 00:06:47.836 START TEST thread 00:06:47.836 ************************************ 00:06:47.836 09:53:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.836 * Looking for test storage... 00:06:47.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.836 09:53:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.836 09:53:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.836 09:53:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.836 09:53:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.836 09:53:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.836 09:53:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.836 09:53:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.836 09:53:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.836 09:53:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.836 09:53:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.836 09:53:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.836 09:53:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.836 09:53:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.836 09:53:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.836 09:53:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.836 09:53:46 -- scripts/common.sh@344 -- # : 1 00:06:47.836 09:53:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.836 09:53:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.836 09:53:46 -- scripts/common.sh@364 -- # decimal 1 00:06:47.836 09:53:46 -- scripts/common.sh@352 -- # local d=1 00:06:47.836 09:53:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.836 09:53:46 -- scripts/common.sh@354 -- # echo 1 00:06:47.836 09:53:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.836 09:53:46 -- scripts/common.sh@365 -- # decimal 2 00:06:47.836 09:53:46 -- scripts/common.sh@352 -- # local d=2 00:06:47.836 09:53:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.836 09:53:46 -- scripts/common.sh@354 -- # echo 2 00:06:47.836 09:53:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.836 09:53:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.836 09:53:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.836 09:53:46 -- scripts/common.sh@367 -- # return 0 00:06:47.836 09:53:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.836 --rc genhtml_branch_coverage=1 00:06:47.836 --rc genhtml_function_coverage=1 00:06:47.836 --rc genhtml_legend=1 00:06:47.836 --rc geninfo_all_blocks=1 00:06:47.836 --rc geninfo_unexecuted_blocks=1 00:06:47.836 00:06:47.836 ' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.836 --rc genhtml_branch_coverage=1 00:06:47.836 --rc genhtml_function_coverage=1 00:06:47.836 --rc genhtml_legend=1 00:06:47.836 --rc geninfo_all_blocks=1 00:06:47.836 --rc geninfo_unexecuted_blocks=1 00:06:47.836 00:06:47.836 ' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.836 --rc genhtml_branch_coverage=1 00:06:47.836 --rc genhtml_function_coverage=1 00:06:47.836 --rc genhtml_legend=1 00:06:47.836 --rc geninfo_all_blocks=1 00:06:47.836 --rc geninfo_unexecuted_blocks=1 00:06:47.836 00:06:47.836 ' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.836 --rc genhtml_branch_coverage=1 00:06:47.836 --rc genhtml_function_coverage=1 00:06:47.836 --rc genhtml_legend=1 00:06:47.836 --rc geninfo_all_blocks=1 00:06:47.836 --rc geninfo_unexecuted_blocks=1 00:06:47.836 00:06:47.836 ' 00:06:47.836 09:53:46 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.836 09:53:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:47.836 09:53:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.836 09:53:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.095 ************************************ 00:06:48.095 START TEST thread_poller_perf 00:06:48.095 ************************************ 00:06:48.095 09:53:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:48.095 [2024-12-16 09:53:46.477762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.095 [2024-12-16 09:53:46.477870] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:06:48.095 [2024-12-16 09:53:46.606735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.095 [2024-12-16 09:53:46.661241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.095 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.471 [2024-12-16T09:53:48.096Z] ====================================== 00:06:49.471 [2024-12-16T09:53:48.096Z] busy:2210249230 (cyc) 00:06:49.471 [2024-12-16T09:53:48.096Z] total_run_count: 386000 00:06:49.471 [2024-12-16T09:53:48.096Z] tsc_hz: 2200000000 (cyc) 00:06:49.471 [2024-12-16T09:53:48.096Z] ====================================== 00:06:49.471 [2024-12-16T09:53:48.096Z] poller_cost: 5726 (cyc), 2602 (nsec) 00:06:49.471 00:06:49.471 real 0m1.261s 00:06:49.471 user 0m1.095s 00:06:49.471 sys 0m0.058s 00:06:49.471 09:53:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.471 ************************************ 00:06:49.471 END TEST thread_poller_perf 00:06:49.471 ************************************ 00:06:49.471 09:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.471 09:53:47 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.471 09:53:47 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:49.471 09:53:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.471 09:53:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.471 ************************************ 00:06:49.471 START TEST thread_poller_perf 00:06:49.471 ************************************ 00:06:49.471 09:53:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.471 [2024-12-16 09:53:47.793625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.471 [2024-12-16 09:53:47.793718] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70258 ] 00:06:49.471 [2024-12-16 09:53:47.929499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.472 Running 1000 pollers for 1 seconds with 0 microseconds period. 
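The poller_cost figures above are simply busy cycles divided by completed runs, converted to nanoseconds with the reported TSC rate: 2210249230 cyc / 386000 runs = 5726 cyc per poll, and 5726 cyc at 2.2 cycles per ns is about 2602 ns, matching the '5726 (cyc), 2602 (nsec)' line. The same arithmetic in shell, numbers copied from the first run:

    busy=2210249230; runs=386000; tsc_hz=2200000000
    echo "poller_cost_cyc  = $(( busy / runs ))"                          # 5726
    echo "poller_cost_nsec = $(( busy / runs * 1000000000 / tsc_hz ))"    # 2602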
00:06:49.472 [2024-12-16 09:53:47.981839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.408 [2024-12-16T09:53:49.033Z] ====================================== 00:06:50.408 [2024-12-16T09:53:49.033Z] busy:2202528308 (cyc) 00:06:50.408 [2024-12-16T09:53:49.033Z] total_run_count: 5318000 00:06:50.408 [2024-12-16T09:53:49.033Z] tsc_hz: 2200000000 (cyc) 00:06:50.408 [2024-12-16T09:53:49.033Z] ====================================== 00:06:50.408 [2024-12-16T09:53:49.033Z] poller_cost: 414 (cyc), 188 (nsec) 00:06:50.667 ************************************ 00:06:50.667 END TEST thread_poller_perf 00:06:50.667 ************************************ 00:06:50.667 00:06:50.667 real 0m1.260s 00:06:50.667 user 0m1.099s 00:06:50.667 sys 0m0.055s 00:06:50.667 09:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.667 09:53:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.667 09:53:49 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.667 00:06:50.667 real 0m2.815s 00:06:50.667 user 0m2.355s 00:06:50.667 sys 0m0.238s 00:06:50.667 09:53:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.667 09:53:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.667 ************************************ 00:06:50.667 END TEST thread 00:06:50.667 ************************************ 00:06:50.667 09:53:49 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:50.667 09:53:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.667 09:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.667 09:53:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.667 ************************************ 00:06:50.667 START TEST accel 00:06:50.667 ************************************ 00:06:50.667 09:53:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:50.667 * Looking for test storage... 00:06:50.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:50.667 09:53:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.667 09:53:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.667 09:53:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.927 09:53:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.927 09:53:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.927 09:53:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.927 09:53:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.927 09:53:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.927 09:53:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.927 09:53:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.927 09:53:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.927 09:53:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.927 09:53:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.927 09:53:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.927 09:53:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.927 09:53:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.927 09:53:49 -- scripts/common.sh@344 -- # : 1 00:06:50.927 09:53:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.927 09:53:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.927 09:53:49 -- scripts/common.sh@364 -- # decimal 1 00:06:50.927 09:53:49 -- scripts/common.sh@352 -- # local d=1 00:06:50.927 09:53:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.927 09:53:49 -- scripts/common.sh@354 -- # echo 1 00:06:50.927 09:53:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.927 09:53:49 -- scripts/common.sh@365 -- # decimal 2 00:06:50.927 09:53:49 -- scripts/common.sh@352 -- # local d=2 00:06:50.927 09:53:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.927 09:53:49 -- scripts/common.sh@354 -- # echo 2 00:06:50.927 09:53:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.927 09:53:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.927 09:53:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.927 09:53:49 -- scripts/common.sh@367 -- # return 0 00:06:50.927 09:53:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.927 09:53:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.927 --rc genhtml_branch_coverage=1 00:06:50.927 --rc genhtml_function_coverage=1 00:06:50.927 --rc genhtml_legend=1 00:06:50.927 --rc geninfo_all_blocks=1 00:06:50.927 --rc geninfo_unexecuted_blocks=1 00:06:50.927 00:06:50.927 ' 00:06:50.927 09:53:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.927 --rc genhtml_branch_coverage=1 00:06:50.927 --rc genhtml_function_coverage=1 00:06:50.927 --rc genhtml_legend=1 00:06:50.927 --rc geninfo_all_blocks=1 00:06:50.927 --rc geninfo_unexecuted_blocks=1 00:06:50.927 00:06:50.927 ' 00:06:50.927 09:53:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.927 --rc genhtml_branch_coverage=1 00:06:50.927 --rc genhtml_function_coverage=1 00:06:50.927 --rc genhtml_legend=1 00:06:50.927 --rc geninfo_all_blocks=1 00:06:50.927 --rc geninfo_unexecuted_blocks=1 00:06:50.927 00:06:50.927 ' 00:06:50.927 09:53:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.927 --rc genhtml_branch_coverage=1 00:06:50.927 --rc genhtml_function_coverage=1 00:06:50.927 --rc genhtml_legend=1 00:06:50.927 --rc geninfo_all_blocks=1 00:06:50.927 --rc geninfo_unexecuted_blocks=1 00:06:50.927 00:06:50.927 ' 00:06:50.927 09:53:49 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:50.927 09:53:49 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:50.927 09:53:49 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.927 09:53:49 -- accel/accel.sh@59 -- # spdk_tgt_pid=70334 00:06:50.927 09:53:49 -- accel/accel.sh@60 -- # waitforlisten 70334 00:06:50.927 09:53:49 -- common/autotest_common.sh@829 -- # '[' -z 70334 ']' 00:06:50.927 09:53:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.927 09:53:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.927 09:53:49 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:50.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
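The scripts/common.sh trace interleaved above (lt 1.15 2 calling cmp_versions 1.15 '<' 2) is a field-by-field numeric version comparison, used here to decide that lcov 1.15 predates 2.x and therefore gets the old '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling. A condensed sketch of the same idea; ver_lt is an illustrative name, not the repo's function:

    ver_lt() {                      # true when $1 < $2, comparing dot/dash-separated fields numerically
      local IFS=.- i a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                      # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov older than 2: use the pre-2.0 --rc option names"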
00:06:50.927 09:53:49 -- accel/accel.sh@58 -- # build_accel_config 00:06:50.927 09:53:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.927 09:53:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.927 09:53:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.927 09:53:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.927 09:53:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.927 09:53:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.927 09:53:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.927 09:53:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.927 09:53:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.927 09:53:49 -- accel/accel.sh@42 -- # jq -r . 00:06:50.927 [2024-12-16 09:53:49.377004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.928 [2024-12-16 09:53:49.377113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70334 ] 00:06:50.928 [2024-12-16 09:53:49.513044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.186 [2024-12-16 09:53:49.570765] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.186 [2024-12-16 09:53:49.570948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.123 09:53:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.123 09:53:50 -- common/autotest_common.sh@862 -- # return 0 00:06:52.123 09:53:50 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:52.123 09:53:50 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:52.123 09:53:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.123 09:53:50 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:52.123 09:53:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.123 09:53:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.123 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.123 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.123 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # IFS== 00:06:52.124 09:53:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:52.124 09:53:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:52.124 09:53:50 -- accel/accel.sh@67 -- # killprocess 70334 00:06:52.124 09:53:50 -- common/autotest_common.sh@936 -- # '[' -z 70334 ']' 00:06:52.124 09:53:50 -- common/autotest_common.sh@940 -- # kill -0 70334 00:06:52.124 09:53:50 -- common/autotest_common.sh@941 -- # uname 00:06:52.124 09:53:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.124 09:53:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70334 00:06:52.124 09:53:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.124 killing process with pid 70334 00:06:52.124 09:53:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.124 09:53:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70334' 00:06:52.124 09:53:50 -- common/autotest_common.sh@955 -- # kill 70334 00:06:52.124 09:53:50 -- common/autotest_common.sh@960 -- # wait 70334 00:06:52.387 09:53:50 -- accel/accel.sh@68 -- # trap - ERR 00:06:52.387 09:53:50 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:52.387 09:53:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:52.387 09:53:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.387 09:53:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.387 09:53:50 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:52.387 09:53:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:52.387 09:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.387 09:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.387 09:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.387 09:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.387 09:53:50 -- accel/accel.sh@42 -- # jq -r . 
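The expected_opcs table built above comes from the accel_get_opc_assignments RPC: every opcode in this run is reported as handled by the 'software' module (no hardware accel engine is configured), and the jq filter flattens the JSON object into opcode=module lines. A by-hand sketch of the same query, assuming the repo's scripts/rpc.py client; the jq filter is copied from the trace:

    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # in this run every line comes back as <opcode>=software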
00:06:52.387 09:53:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.387 09:53:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.387 09:53:50 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:52.387 09:53:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:52.387 09:53:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.387 09:53:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.387 ************************************ 00:06:52.387 START TEST accel_missing_filename 00:06:52.387 ************************************ 00:06:52.387 09:53:50 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:52.387 09:53:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.387 09:53:50 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:52.387 09:53:50 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.387 09:53:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.387 09:53:50 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.387 09:53:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.387 09:53:50 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:52.387 09:53:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:52.387 09:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.387 09:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.387 09:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.387 09:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.387 09:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.387 09:53:50 -- accel/accel.sh@42 -- # jq -r . 00:06:52.387 [2024-12-16 09:53:50.924481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.387 [2024-12-16 09:53:50.924551] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70409 ] 00:06:52.683 [2024-12-16 09:53:51.055386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.683 [2024-12-16 09:53:51.108136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.683 [2024-12-16 09:53:51.162804] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.683 [2024-12-16 09:53:51.235807] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.683 A filename is required. 
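accel_missing_filename drives a compress workload with no input file, and accel_perf bails out with 'A filename is required.': for compress/decompress the -l option names the uncompressed input file, per the option help printed later in the log. A sketch of the failing call next to one with the input file the next test case supplies (paths from the trace, shown relative to the repo root):

    ./build/examples/accel_perf -t 1 -w compress                        # rejected: "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib      # same workload with an input file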
00:06:52.683 09:53:51 -- common/autotest_common.sh@653 -- # es=234 00:06:52.683 09:53:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.683 09:53:51 -- common/autotest_common.sh@662 -- # es=106 00:06:52.683 09:53:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.683 09:53:51 -- common/autotest_common.sh@670 -- # es=1 00:06:52.683 09:53:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.683 00:06:52.683 real 0m0.388s 00:06:52.683 user 0m0.231s 00:06:52.683 sys 0m0.105s 00:06:52.683 09:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.683 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:52.683 ************************************ 00:06:52.683 END TEST accel_missing_filename 00:06:52.683 ************************************ 00:06:52.959 09:53:51 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.959 09:53:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:52.959 09:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.959 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:52.959 ************************************ 00:06:52.959 START TEST accel_compress_verify 00:06:52.959 ************************************ 00:06:52.959 09:53:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.959 09:53:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.959 09:53:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.959 09:53:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:52.959 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.959 09:53:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:52.959 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.959 09:53:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.959 09:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.959 09:53:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.959 09:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.959 09:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.959 09:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.959 09:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.959 09:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.959 09:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.959 09:53:51 -- accel/accel.sh@42 -- # jq -r . 00:06:52.959 [2024-12-16 09:53:51.370283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.959 [2024-12-16 09:53:51.370401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70428 ] 00:06:52.959 [2024-12-16 09:53:51.508412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.959 [2024-12-16 09:53:51.559933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.218 [2024-12-16 09:53:51.612860] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.218 [2024-12-16 09:53:51.689765] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:53.218 00:06:53.218 Compression does not support the verify option, aborting. 00:06:53.218 09:53:51 -- common/autotest_common.sh@653 -- # es=161 00:06:53.218 09:53:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.218 09:53:51 -- common/autotest_common.sh@662 -- # es=33 00:06:53.218 09:53:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.218 09:53:51 -- common/autotest_common.sh@670 -- # es=1 00:06:53.218 09:53:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.218 00:06:53.218 real 0m0.406s 00:06:53.218 user 0m0.250s 00:06:53.218 sys 0m0.104s 00:06:53.218 09:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.218 ************************************ 00:06:53.218 END TEST accel_compress_verify 00:06:53.218 ************************************ 00:06:53.218 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.218 09:53:51 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:53.218 09:53:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:53.218 09:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.219 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.219 ************************************ 00:06:53.219 START TEST accel_wrong_workload 00:06:53.219 ************************************ 00:06:53.219 09:53:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:53.219 09:53:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:53.219 09:53:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:53.219 09:53:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:53.219 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.219 09:53:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:53.219 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.219 09:53:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:53.219 09:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:53.219 09:53:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.219 09:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.219 09:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.219 09:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.219 09:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.219 09:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.219 09:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.219 09:53:51 -- accel/accel.sh@42 -- # jq -r . 
00:06:53.219 Unsupported workload type: foobar 00:06:53.219 [2024-12-16 09:53:51.823791] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:53.219 accel_perf options: 00:06:53.219 [-h help message] 00:06:53.219 [-q queue depth per core] 00:06:53.219 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.219 [-T number of threads per core 00:06:53.219 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.219 [-t time in seconds] 00:06:53.219 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.219 [ dif_verify, , dif_generate, dif_generate_copy 00:06:53.219 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.219 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.219 [-S for crc32c workload, use this seed value (default 0) 00:06:53.219 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.219 [-f for fill workload, use this BYTE value (default 255) 00:06:53.219 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.219 [-y verify result if this switch is on] 00:06:53.219 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.219 Can be used to spread operations across a wider range of memory. 00:06:53.219 09:53:51 -- common/autotest_common.sh@653 -- # es=1 00:06:53.219 09:53:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.219 09:53:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.219 09:53:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.219 00:06:53.219 real 0m0.026s 00:06:53.219 user 0m0.012s 00:06:53.219 sys 0m0.014s 00:06:53.219 09:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.219 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.219 ************************************ 00:06:53.219 END TEST accel_wrong_workload 00:06:53.219 ************************************ 00:06:53.478 09:53:51 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.478 09:53:51 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:53.478 09:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.478 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.478 ************************************ 00:06:53.478 START TEST accel_negative_buffers 00:06:53.478 ************************************ 00:06:53.478 09:53:51 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:53.478 09:53:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:53.478 09:53:51 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:53.478 09:53:51 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:53.478 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.478 09:53:51 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:53.478 09:53:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.478 09:53:51 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:53.478 09:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:53.478 09:53:51 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:53.478 09:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.478 09:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.478 09:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.478 09:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.478 09:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.478 09:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.478 09:53:51 -- accel/accel.sh@42 -- # jq -r . 00:06:53.478 -x option must be non-negative. 00:06:53.478 [2024-12-16 09:53:51.901551] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:53.478 accel_perf options: 00:06:53.478 [-h help message] 00:06:53.478 [-q queue depth per core] 00:06:53.478 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:53.478 [-T number of threads per core 00:06:53.478 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:53.478 [-t time in seconds] 00:06:53.478 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:53.478 [ dif_verify, , dif_generate, dif_generate_copy 00:06:53.478 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:53.478 [-l for compress/decompress workloads, name of uncompressed input file 00:06:53.478 [-S for crc32c workload, use this seed value (default 0) 00:06:53.478 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:53.478 [-f for fill workload, use this BYTE value (default 255) 00:06:53.478 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:53.478 [-y verify result if this switch is on] 00:06:53.478 [-a tasks to allocate per core (default: same value as -q)] 00:06:53.478 Can be used to spread operations across a wider range of memory. 
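accel_negative_buffers passes '-x -1' and accel_perf rejects it before starting, printing '-x option must be non-negative.' followed by the option help above; per that help the xor workload takes a source-buffer count with a minimum of 2. A sketch of the rejected call next to a presumably valid one (binary path from the trace, -x 2 chosen as the documented minimum):

    ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: "-x option must be non-negative."
    ./build/examples/accel_perf -t 1 -w xor -y -x 2    # minimum source-buffer count per the help text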
00:06:53.478 09:53:51 -- common/autotest_common.sh@653 -- # es=1 00:06:53.478 09:53:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.478 09:53:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.478 09:53:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.478 00:06:53.478 real 0m0.031s 00:06:53.478 user 0m0.014s 00:06:53.478 sys 0m0.017s 00:06:53.478 09:53:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.478 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.478 ************************************ 00:06:53.478 END TEST accel_negative_buffers 00:06:53.478 ************************************ 00:06:53.478 09:53:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:53.478 09:53:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:53.478 09:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.478 09:53:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.478 ************************************ 00:06:53.478 START TEST accel_crc32c 00:06:53.479 ************************************ 00:06:53.479 09:53:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:53.479 09:53:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.479 09:53:51 -- accel/accel.sh@17 -- # local accel_module 00:06:53.479 09:53:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:53.479 09:53:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:53.479 09:53:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.479 09:53:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.479 09:53:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.479 09:53:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.479 09:53:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.479 09:53:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.479 09:53:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.479 09:53:51 -- accel/accel.sh@42 -- # jq -r . 00:06:53.479 [2024-12-16 09:53:51.987721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.479 [2024-12-16 09:53:51.987808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70492 ] 00:06:53.737 [2024-12-16 09:53:52.125172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.737 [2024-12-16 09:53:52.178399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.115 09:53:53 -- accel/accel.sh@18 -- # out=' 00:06:55.115 SPDK Configuration: 00:06:55.115 Core mask: 0x1 00:06:55.115 00:06:55.115 Accel Perf Configuration: 00:06:55.115 Workload Type: crc32c 00:06:55.115 CRC-32C seed: 32 00:06:55.115 Transfer size: 4096 bytes 00:06:55.115 Vector count 1 00:06:55.115 Module: software 00:06:55.115 Queue depth: 32 00:06:55.115 Allocate depth: 32 00:06:55.115 # threads/core: 1 00:06:55.115 Run time: 1 seconds 00:06:55.115 Verify: Yes 00:06:55.115 00:06:55.115 Running for 1 seconds... 
00:06:55.115 00:06:55.115 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.115 ------------------------------------------------------------------------------------ 00:06:55.115 0,0 559168/s 2184 MiB/s 0 0 00:06:55.115 ==================================================================================== 00:06:55.115 Total 559168/s 2184 MiB/s 0 0' 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.115 09:53:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.115 09:53:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.115 09:53:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.115 09:53:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.115 09:53:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.115 09:53:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.115 09:53:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.115 09:53:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.115 [2024-12-16 09:53:53.378215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.115 [2024-12-16 09:53:53.378306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:06:55.115 [2024-12-16 09:53:53.509169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.115 [2024-12-16 09:53:53.561146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=0x1 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=crc32c 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=32 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=software 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=32 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=32 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=1 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val=Yes 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:55.115 09:53:53 -- accel/accel.sh@21 -- # val= 00:06:55.115 09:53:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # IFS=: 00:06:55.115 09:53:53 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- 
accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@21 -- # val= 00:06:56.492 09:53:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # IFS=: 00:06:56.492 09:53:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.492 09:53:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.492 09:53:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:56.492 09:53:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.492 00:06:56.492 real 0m2.787s 00:06:56.492 user 0m1.210s 00:06:56.492 sys 0m0.102s 00:06:56.492 09:53:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.492 ************************************ 00:06:56.492 END TEST accel_crc32c 00:06:56.492 ************************************ 00:06:56.492 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.492 09:53:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:56.492 09:53:54 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:56.492 09:53:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.492 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:06:56.492 ************************************ 00:06:56.492 START TEST accel_crc32c_C2 00:06:56.492 ************************************ 00:06:56.492 09:53:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:56.492 09:53:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.492 09:53:54 -- accel/accel.sh@17 -- # local accel_module 00:06:56.492 09:53:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:56.492 09:53:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:56.492 09:53:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.492 09:53:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.492 09:53:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.492 09:53:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.492 09:53:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.492 09:53:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.492 09:53:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.492 09:53:54 -- accel/accel.sh@42 -- # jq -r . 00:06:56.492 [2024-12-16 09:53:54.823647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.492 [2024-12-16 09:53:54.823756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70542 ] 00:06:56.492 [2024-12-16 09:53:54.961655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.492 [2024-12-16 09:53:55.015266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.868 09:53:56 -- accel/accel.sh@18 -- # out=' 00:06:57.868 SPDK Configuration: 00:06:57.868 Core mask: 0x1 00:06:57.868 00:06:57.868 Accel Perf Configuration: 00:06:57.868 Workload Type: crc32c 00:06:57.868 CRC-32C seed: 0 00:06:57.868 Transfer size: 4096 bytes 00:06:57.868 Vector count 2 00:06:57.868 Module: software 00:06:57.868 Queue depth: 32 00:06:57.868 Allocate depth: 32 00:06:57.868 # threads/core: 1 00:06:57.868 Run time: 1 seconds 00:06:57.868 Verify: Yes 00:06:57.868 00:06:57.868 Running for 1 seconds... 
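A quick way to sanity-check the result tables in this section: the MiB/s column is simply the transfer rate multiplied by the reported transfer size, expressed in MiB. Using the first crc32c table above (559168 transfers/s at 4096 bytes per transfer), a rough bash check:

  # 559168 transfers/s * 4096 bytes, converted to MiB/s (1 MiB = 1048576 bytes)
  echo $(( 559168 * 4096 / 1048576 ))   # prints 2184

Because every run here uses a single reactor (core mask 0x1), the per-core 0,0 line and the Total line of each table describe the same work and should report the same bandwidth.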
00:06:57.868 00:06:57.868 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.868 ------------------------------------------------------------------------------------ 00:06:57.868 0,0 427584/s 1670 MiB/s 0 0 00:06:57.868 ==================================================================================== 00:06:57.868 Total 427584/s 1670 MiB/s 0 0' 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.868 09:53:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.868 09:53:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.868 09:53:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.868 09:53:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.868 09:53:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.868 09:53:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.868 09:53:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.868 09:53:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.868 09:53:56 -- accel/accel.sh@42 -- # jq -r . 00:06:57.868 [2024-12-16 09:53:56.242760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.868 [2024-12-16 09:53:56.243740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:06:57.868 [2024-12-16 09:53:56.381031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.868 [2024-12-16 09:53:56.430689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.868 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:57.868 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.868 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:57.868 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.868 09:53:56 -- accel/accel.sh@21 -- # val=0x1 00:06:57.868 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.868 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:57.868 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.868 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:57.868 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.868 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=crc32c 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=0 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=software 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=32 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=32 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=1 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val=Yes 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.127 09:53:56 -- accel/accel.sh@21 -- # val= 00:06:58.127 09:53:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.127 09:53:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- 
accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@21 -- # val= 00:06:59.063 09:53:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # IFS=: 00:06:59.063 09:53:57 -- accel/accel.sh@20 -- # read -r var val 00:06:59.063 09:53:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.063 09:53:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:59.063 09:53:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.063 00:06:59.063 real 0m2.826s 00:06:59.063 user 0m2.400s 00:06:59.063 sys 0m0.217s 00:06:59.063 09:53:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.063 ************************************ 00:06:59.063 END TEST accel_crc32c_C2 00:06:59.063 ************************************ 00:06:59.063 09:53:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.063 09:53:57 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.063 09:53:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:59.063 09:53:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.063 09:53:57 -- common/autotest_common.sh@10 -- # set +x 00:06:59.063 ************************************ 00:06:59.063 START TEST accel_copy 00:06:59.063 ************************************ 00:06:59.063 09:53:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:59.063 09:53:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.063 09:53:57 -- accel/accel.sh@17 -- # local accel_module 00:06:59.063 09:53:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:59.063 09:53:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.063 09:53:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.063 09:53:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.063 09:53:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.063 09:53:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.063 09:53:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.063 09:53:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.063 09:53:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.063 09:53:57 -- accel/accel.sh@42 -- # jq -r . 00:06:59.322 [2024-12-16 09:53:57.701536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.322 [2024-12-16 09:53:57.701646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:06:59.322 [2024-12-16 09:53:57.838212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.322 [2024-12-16 09:53:57.890783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.699 09:53:59 -- accel/accel.sh@18 -- # out=' 00:07:00.699 SPDK Configuration: 00:07:00.699 Core mask: 0x1 00:07:00.699 00:07:00.699 Accel Perf Configuration: 00:07:00.699 Workload Type: copy 00:07:00.699 Transfer size: 4096 bytes 00:07:00.699 Vector count 1 00:07:00.699 Module: software 00:07:00.699 Queue depth: 32 00:07:00.699 Allocate depth: 32 00:07:00.699 # threads/core: 1 00:07:00.699 Run time: 1 seconds 00:07:00.699 Verify: Yes 00:07:00.699 00:07:00.699 Running for 1 seconds... 
00:07:00.699 00:07:00.699 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.699 ------------------------------------------------------------------------------------ 00:07:00.699 0,0 388672/s 1518 MiB/s 0 0 00:07:00.699 ==================================================================================== 00:07:00.699 Total 388672/s 1518 MiB/s 0 0' 00:07:00.699 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.699 09:53:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:00.699 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.699 09:53:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.699 09:53:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.699 09:53:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.699 09:53:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.699 09:53:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.699 09:53:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.699 09:53:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.699 09:53:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.699 09:53:59 -- accel/accel.sh@42 -- # jq -r . 00:07:00.699 [2024-12-16 09:53:59.106630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.699 [2024-12-16 09:53:59.107865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70614 ] 00:07:00.699 [2024-12-16 09:53:59.242255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.699 [2024-12-16 09:53:59.291808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=0x1 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=copy 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- 
accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=software 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=32 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=32 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=1 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val=Yes 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.958 09:53:59 -- accel/accel.sh@21 -- # val= 00:07:00.958 09:53:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.958 09:53:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.893 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.893 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.893 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.893 09:54:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.893 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.893 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.893 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.893 09:54:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.893 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.893 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.893 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.894 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.894 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.894 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.894 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.894 09:54:00 -- accel/accel.sh@21 -- # val= 00:07:01.894 09:54:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.894 09:54:00 -- accel/accel.sh@20 -- # IFS=: 00:07:01.894 09:54:00 -- 
accel/accel.sh@20 -- # read -r var val 00:07:01.894 09:54:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.894 09:54:00 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:01.894 09:54:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.894 00:07:01.894 real 0m2.811s 00:07:01.894 user 0m2.389s 00:07:01.894 sys 0m0.213s 00:07:01.894 09:54:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.894 ************************************ 00:07:01.894 END TEST accel_copy 00:07:01.894 ************************************ 00:07:01.894 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.152 09:54:00 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.152 09:54:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:02.152 09:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.152 09:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:02.152 ************************************ 00:07:02.152 START TEST accel_fill 00:07:02.152 ************************************ 00:07:02.152 09:54:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.152 09:54:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.152 09:54:00 -- accel/accel.sh@17 -- # local accel_module 00:07:02.152 09:54:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.152 09:54:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.152 09:54:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.152 09:54:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.152 09:54:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.152 09:54:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.152 09:54:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.152 09:54:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.152 09:54:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.152 09:54:00 -- accel/accel.sh@42 -- # jq -r . 00:07:02.152 [2024-12-16 09:54:00.566443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.152 [2024-12-16 09:54:00.567329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:07:02.152 [2024-12-16 09:54:00.696503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.152 [2024-12-16 09:54:00.748311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.529 09:54:01 -- accel/accel.sh@18 -- # out=' 00:07:03.529 SPDK Configuration: 00:07:03.529 Core mask: 0x1 00:07:03.529 00:07:03.529 Accel Perf Configuration: 00:07:03.529 Workload Type: fill 00:07:03.529 Fill pattern: 0x80 00:07:03.529 Transfer size: 4096 bytes 00:07:03.529 Vector count 1 00:07:03.529 Module: software 00:07:03.529 Queue depth: 64 00:07:03.529 Allocate depth: 64 00:07:03.529 # threads/core: 1 00:07:03.529 Run time: 1 seconds 00:07:03.529 Verify: Yes 00:07:03.529 00:07:03.529 Running for 1 seconds... 
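The fill test above is launched as 'accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y', and the configuration dump reflects those flags directly: decimal 128 is the 0x80 fill pattern, while -q and -a raise the queue and allocate depths to 64 (the other tests in this log keep 32/32). A one-line check of the byte-value conversion:

  printf '0x%x\n' 128   # prints 0x80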
00:07:03.529 00:07:03.529 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.529 ------------------------------------------------------------------------------------ 00:07:03.529 0,0 555200/s 2168 MiB/s 0 0 00:07:03.529 ==================================================================================== 00:07:03.529 Total 555200/s 2168 MiB/s 0 0' 00:07:03.529 09:54:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.529 09:54:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.529 09:54:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.529 09:54:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.529 09:54:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.529 09:54:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.529 09:54:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.529 09:54:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.529 09:54:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.529 09:54:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.529 09:54:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.529 09:54:01 -- accel/accel.sh@42 -- # jq -r . 00:07:03.529 [2024-12-16 09:54:01.981466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.529 [2024-12-16 09:54:01.981560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70665 ] 00:07:03.529 [2024-12-16 09:54:02.114292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.790 [2024-12-16 09:54:02.177506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=0x1 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=fill 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=0x80 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 
00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=software 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=64 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=64 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=1 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val=Yes 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.790 09:54:02 -- accel/accel.sh@21 -- # val= 00:07:03.790 09:54:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.790 09:54:02 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@21 -- # val= 00:07:05.167 09:54:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # IFS=: 00:07:05.167 09:54:03 -- accel/accel.sh@20 -- # read -r var val 00:07:05.167 09:54:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.167 09:54:03 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:05.167 09:54:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.167 00:07:05.167 real 0m2.826s 00:07:05.167 user 0m2.414s 00:07:05.167 sys 0m0.210s 00:07:05.167 09:54:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.167 09:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.167 ************************************ 00:07:05.167 END TEST accel_fill 00:07:05.167 ************************************ 00:07:05.167 09:54:03 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:05.167 09:54:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:05.167 09:54:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.167 09:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:05.167 ************************************ 00:07:05.167 START TEST accel_copy_crc32c 00:07:05.167 ************************************ 00:07:05.167 09:54:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:05.167 09:54:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.167 09:54:03 -- accel/accel.sh@17 -- # local accel_module 00:07:05.167 09:54:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.167 09:54:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.167 09:54:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.167 09:54:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.167 09:54:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.167 09:54:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.167 09:54:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.167 09:54:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.167 09:54:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.167 09:54:03 -- accel/accel.sh@42 -- # jq -r . 00:07:05.168 [2024-12-16 09:54:03.451589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.168 [2024-12-16 09:54:03.451697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:07:05.168 [2024-12-16 09:54:03.589746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.168 [2024-12-16 09:54:03.642098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.553 09:54:04 -- accel/accel.sh@18 -- # out=' 00:07:06.553 SPDK Configuration: 00:07:06.553 Core mask: 0x1 00:07:06.553 00:07:06.553 Accel Perf Configuration: 00:07:06.553 Workload Type: copy_crc32c 00:07:06.553 CRC-32C seed: 0 00:07:06.553 Vector size: 4096 bytes 00:07:06.553 Transfer size: 4096 bytes 00:07:06.553 Vector count 1 00:07:06.553 Module: software 00:07:06.553 Queue depth: 32 00:07:06.553 Allocate depth: 32 00:07:06.553 # threads/core: 1 00:07:06.553 Run time: 1 seconds 00:07:06.553 Verify: Yes 00:07:06.553 00:07:06.553 Running for 1 seconds... 
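The copy_crc32c workload configured above combines a 4096-byte copy with a CRC-32C computation over the same data in a single software operation; consistent with that, the transfer rate in the table that follows (about 308k transfers/s) sits below both the plain copy (~389k/s) and the plain crc32c (~559k/s) results earlier in this log. A rough per-operation cost from the combined rate:

  # Implied time per copy_crc32c operation, in nanoseconds
  echo $(( 10**9 / 307744 ))   # prints 3249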
00:07:06.553 00:07:06.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.553 ------------------------------------------------------------------------------------ 00:07:06.553 0,0 307744/s 1202 MiB/s 0 0 00:07:06.553 ==================================================================================== 00:07:06.553 Total 307744/s 1202 MiB/s 0 0' 00:07:06.553 09:54:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.553 09:54:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.553 09:54:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.553 09:54:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.553 09:54:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.553 09:54:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.553 09:54:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.553 09:54:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.553 09:54:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.553 09:54:04 -- accel/accel.sh@42 -- # jq -r . 00:07:06.553 [2024-12-16 09:54:04.853656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.553 [2024-12-16 09:54:04.854570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:07:06.553 [2024-12-16 09:54:04.995799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.553 [2024-12-16 09:54:05.047220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=0x1 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=0 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 
09:54:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=software 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=32 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=32 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=1 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val=Yes 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 09:54:05 -- accel/accel.sh@21 -- # val= 00:07:06.553 09:54:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 09:54:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 
00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@21 -- # val= 00:07:07.930 09:54:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.930 09:54:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.930 09:54:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.930 09:54:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:07.930 09:54:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.930 00:07:07.930 real 0m2.815s 00:07:07.930 user 0m2.382s 00:07:07.930 sys 0m0.232s 00:07:07.930 09:54:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.930 09:54:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.930 ************************************ 00:07:07.930 END TEST accel_copy_crc32c 00:07:07.930 ************************************ 00:07:07.930 09:54:06 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.930 09:54:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:07.930 09:54:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.930 09:54:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.930 ************************************ 00:07:07.930 START TEST accel_copy_crc32c_C2 00:07:07.930 ************************************ 00:07:07.930 09:54:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.930 09:54:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.930 09:54:06 -- accel/accel.sh@17 -- # local accel_module 00:07:07.930 09:54:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.930 09:54:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.930 09:54:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.930 09:54:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.930 09:54:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.930 09:54:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.930 09:54:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.930 09:54:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.930 09:54:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.930 09:54:06 -- accel/accel.sh@42 -- # jq -r . 00:07:07.930 [2024-12-16 09:54:06.315304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.930 [2024-12-16 09:54:06.315866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:07:07.930 [2024-12-16 09:54:06.453399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.930 [2024-12-16 09:54:06.509634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.307 09:54:07 -- accel/accel.sh@18 -- # out=' 00:07:09.307 SPDK Configuration: 00:07:09.307 Core mask: 0x1 00:07:09.307 00:07:09.307 Accel Perf Configuration: 00:07:09.307 Workload Type: copy_crc32c 00:07:09.307 CRC-32C seed: 0 00:07:09.307 Vector size: 4096 bytes 00:07:09.307 Transfer size: 8192 bytes 00:07:09.307 Vector count 2 00:07:09.307 Module: software 00:07:09.307 Queue depth: 32 00:07:09.307 Allocate depth: 32 00:07:09.307 # threads/core: 1 00:07:09.307 Run time: 1 seconds 00:07:09.307 Verify: Yes 00:07:09.307 00:07:09.307 Running for 1 seconds... 00:07:09.307 00:07:09.307 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.307 ------------------------------------------------------------------------------------ 00:07:09.307 0,0 220576/s 1723 MiB/s 0 0 00:07:09.307 ==================================================================================== 00:07:09.307 Total 220576/s 1723 MiB/s 0 0' 00:07:09.307 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.307 09:54:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:09.307 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.307 09:54:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:09.307 09:54:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.307 09:54:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.307 09:54:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.307 09:54:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.307 09:54:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.307 09:54:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.307 09:54:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.307 09:54:07 -- accel/accel.sh@42 -- # jq -r . 00:07:09.307 [2024-12-16 09:54:07.723424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:09.307 [2024-12-16 09:54:07.723519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70767 ] 00:07:09.307 [2024-12-16 09:54:07.859226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.307 [2024-12-16 09:54:07.910609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=0x1 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=0 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=software 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=32 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=32 
00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=1 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val=Yes 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.566 09:54:07 -- accel/accel.sh@21 -- # val= 00:07:09.566 09:54:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.566 09:54:07 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@21 -- # val= 00:07:10.502 09:54:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.502 09:54:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.502 09:54:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.502 09:54:09 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:10.502 09:54:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.502 00:07:10.502 real 0m2.812s 00:07:10.502 user 0m2.396s 00:07:10.502 sys 0m0.216s 00:07:10.502 09:54:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.502 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:10.502 ************************************ 00:07:10.502 END TEST accel_copy_crc32c_C2 00:07:10.502 ************************************ 00:07:10.761 09:54:09 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:10.761 09:54:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
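A quick aside on the copy_crc32c figures reported above: the per-core row is consistent with transfers/s multiplied by the 8192-byte transfer size (two 4096-byte source vectors, per the -C 2 flag), while the Total row appears to use only the 4096-byte vector size. This is an observation about the printed numbers, not a statement about accel_perf internals. A minimal shell check of that arithmetic (assuming 1 MiB = 1048576 bytes):

  # sanity-check the two bandwidth figures printed for copy_crc32c -C 2
  echo $(( 220576 * 8192 / 1048576 ))   # 1723 MiB/s, matches the per-core row
  echo $(( 220576 * 4096 / 1048576 ))   # 861 MiB/s, matches the Total row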
00:07:10.761 09:54:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.761 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:07:10.761 ************************************ 00:07:10.761 START TEST accel_dualcast 00:07:10.761 ************************************ 00:07:10.761 09:54:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:10.761 09:54:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.761 09:54:09 -- accel/accel.sh@17 -- # local accel_module 00:07:10.761 09:54:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:10.761 09:54:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:10.761 09:54:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.761 09:54:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.761 09:54:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.761 09:54:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.761 09:54:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.761 09:54:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.761 09:54:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.761 09:54:09 -- accel/accel.sh@42 -- # jq -r . 00:07:10.761 [2024-12-16 09:54:09.182003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.761 [2024-12-16 09:54:09.182090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70802 ] 00:07:10.761 [2024-12-16 09:54:09.310802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.761 [2024-12-16 09:54:09.363073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.137 09:54:10 -- accel/accel.sh@18 -- # out=' 00:07:12.137 SPDK Configuration: 00:07:12.137 Core mask: 0x1 00:07:12.137 00:07:12.137 Accel Perf Configuration: 00:07:12.137 Workload Type: dualcast 00:07:12.137 Transfer size: 4096 bytes 00:07:12.137 Vector count 1 00:07:12.137 Module: software 00:07:12.137 Queue depth: 32 00:07:12.137 Allocate depth: 32 00:07:12.137 # threads/core: 1 00:07:12.137 Run time: 1 seconds 00:07:12.137 Verify: Yes 00:07:12.137 00:07:12.137 Running for 1 seconds... 00:07:12.137 00:07:12.137 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.137 ------------------------------------------------------------------------------------ 00:07:12.137 0,0 419424/s 1638 MiB/s 0 0 00:07:12.137 ==================================================================================== 00:07:12.137 Total 419424/s 1638 MiB/s 0 0' 00:07:12.137 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.137 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.137 09:54:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:12.137 09:54:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:12.137 09:54:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.137 09:54:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.137 09:54:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.137 09:54:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.137 09:54:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.137 09:54:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.137 09:54:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.137 09:54:10 -- accel/accel.sh@42 -- # jq -r . 
00:07:12.137 [2024-12-16 09:54:10.570110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.137 [2024-12-16 09:54:10.570177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70821 ] 00:07:12.137 [2024-12-16 09:54:10.698992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.137 [2024-12-16 09:54:10.750212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val=0x1 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val=dualcast 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.396 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.396 09:54:10 -- accel/accel.sh@21 -- # val=software 00:07:12.396 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.396 09:54:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val=32 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val=32 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val=1 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 
09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val=Yes 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.397 09:54:10 -- accel/accel.sh@21 -- # val= 00:07:12.397 09:54:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.397 09:54:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.332 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.332 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.332 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.332 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.332 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.332 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.332 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.332 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.332 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.332 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.591 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.591 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.591 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.591 09:54:11 -- accel/accel.sh@21 -- # val= 00:07:13.591 09:54:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.591 09:54:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.591 09:54:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.591 09:54:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.591 09:54:11 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:13.591 09:54:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.591 00:07:13.591 real 0m2.799s 00:07:13.591 user 0m2.389s 00:07:13.591 sys 0m0.208s 00:07:13.591 ************************************ 00:07:13.591 END TEST accel_dualcast 00:07:13.591 ************************************ 00:07:13.591 09:54:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.591 09:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.591 09:54:11 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:13.591 09:54:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:13.591 09:54:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.591 09:54:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.591 ************************************ 00:07:13.591 START TEST accel_compare 00:07:13.591 ************************************ 00:07:13.591 09:54:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:13.591 
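The dualcast test that just finished (like every workload in this section) launches the accel_perf example twice, visible in the log as two EAL initializations with different PIDs (spdk_pid70802 and spdk_pid70821 for dualcast). To repeat a single run outside the harness, a sketch along these lines should be close; the -c /dev/fd/62 argument seen in the traces is the JSON accel config supplied by accel.sh and is assumed here to be omittable for a plain software-module run, and root plus configured hugepages (e.g. via scripts/setup.sh) are assumed as usual for DPDK-based SPDK apps:

  # hypothetical standalone rerun of the dualcast workload, outside the test harness
  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/examples/accel_perf -t 1 -w dualcast -y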
09:54:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.591 09:54:12 -- accel/accel.sh@17 -- # local accel_module 00:07:13.591 09:54:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:13.591 09:54:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.591 09:54:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.591 09:54:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.591 09:54:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.591 09:54:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.591 09:54:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.591 09:54:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.591 09:54:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.591 09:54:12 -- accel/accel.sh@42 -- # jq -r . 00:07:13.591 [2024-12-16 09:54:12.035793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.591 [2024-12-16 09:54:12.035889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70850 ] 00:07:13.591 [2024-12-16 09:54:12.167860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.850 [2024-12-16 09:54:12.220063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.785 09:54:13 -- accel/accel.sh@18 -- # out=' 00:07:14.785 SPDK Configuration: 00:07:14.785 Core mask: 0x1 00:07:14.785 00:07:14.785 Accel Perf Configuration: 00:07:14.785 Workload Type: compare 00:07:14.785 Transfer size: 4096 bytes 00:07:14.785 Vector count 1 00:07:14.785 Module: software 00:07:14.785 Queue depth: 32 00:07:14.785 Allocate depth: 32 00:07:14.785 # threads/core: 1 00:07:14.785 Run time: 1 seconds 00:07:14.785 Verify: Yes 00:07:14.785 00:07:14.785 Running for 1 seconds... 00:07:14.785 00:07:14.785 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.785 ------------------------------------------------------------------------------------ 00:07:14.785 0,0 554912/s 2167 MiB/s 0 0 00:07:14.785 ==================================================================================== 00:07:14.785 Total 554912/s 2167 MiB/s 0 0' 00:07:14.785 09:54:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:14.785 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:14.785 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.063 09:54:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:15.063 09:54:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.063 09:54:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.063 09:54:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.063 09:54:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.063 09:54:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.063 09:54:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.063 09:54:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.063 09:54:13 -- accel/accel.sh@42 -- # jq -r . 00:07:15.063 [2024-12-16 09:54:13.432924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.063 [2024-12-16 09:54:13.433022] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:07:15.063 [2024-12-16 09:54:13.567492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.063 [2024-12-16 09:54:13.618482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=0x1 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=compare 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=software 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=32 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=32 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=1 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val=Yes 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.333 09:54:13 -- accel/accel.sh@21 -- # val= 00:07:15.333 09:54:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.333 09:54:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@21 -- # val= 00:07:16.268 09:54:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.268 09:54:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.268 09:54:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.268 09:54:14 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:16.268 09:54:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.268 00:07:16.268 real 0m2.814s 00:07:16.268 user 0m2.388s 00:07:16.268 sys 0m0.216s 00:07:16.268 09:54:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.268 ************************************ 00:07:16.268 END TEST accel_compare 00:07:16.268 ************************************ 00:07:16.268 09:54:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 09:54:14 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:16.268 09:54:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.268 09:54:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.268 09:54:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 ************************************ 00:07:16.268 START TEST accel_xor 00:07:16.268 ************************************ 00:07:16.268 09:54:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:16.268 09:54:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.268 09:54:14 -- accel/accel.sh@17 -- # local accel_module 00:07:16.268 
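Note that the compare test above, like the others, reports roughly 2.8 seconds of real time for a 1-second measurement; that is consistent with the two accel_perf launches per workload each paying EAL/driver start-up cost before its 1-second run, rather than with the measurement itself overrunning. Timing a single launch in isolation (same assumptions as the standalone sketch earlier) would look like:

  # hypothetical timing of one accel_perf launch only
  time sudo ./build/examples/accel_perf -t 1 -w compare -y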
09:54:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:16.268 09:54:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.268 09:54:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.268 09:54:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.268 09:54:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.268 09:54:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.268 09:54:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.268 09:54:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.268 09:54:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.268 09:54:14 -- accel/accel.sh@42 -- # jq -r . 00:07:16.526 [2024-12-16 09:54:14.897026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.526 [2024-12-16 09:54:14.897119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:07:16.526 [2024-12-16 09:54:15.031148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.526 [2024-12-16 09:54:15.084404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.902 09:54:16 -- accel/accel.sh@18 -- # out=' 00:07:17.902 SPDK Configuration: 00:07:17.902 Core mask: 0x1 00:07:17.902 00:07:17.902 Accel Perf Configuration: 00:07:17.902 Workload Type: xor 00:07:17.902 Source buffers: 2 00:07:17.902 Transfer size: 4096 bytes 00:07:17.902 Vector count 1 00:07:17.902 Module: software 00:07:17.902 Queue depth: 32 00:07:17.902 Allocate depth: 32 00:07:17.902 # threads/core: 1 00:07:17.902 Run time: 1 seconds 00:07:17.902 Verify: Yes 00:07:17.902 00:07:17.902 Running for 1 seconds... 00:07:17.902 00:07:17.902 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.902 ------------------------------------------------------------------------------------ 00:07:17.902 0,0 289952/s 1132 MiB/s 0 0 00:07:17.902 ==================================================================================== 00:07:17.902 Total 289952/s 1132 MiB/s 0 0' 00:07:17.902 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:17.902 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:17.902 09:54:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:17.902 09:54:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.902 09:54:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.902 09:54:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.902 09:54:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.902 09:54:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.902 09:54:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.902 09:54:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.902 09:54:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.902 09:54:16 -- accel/accel.sh@42 -- # jq -r . 00:07:17.902 [2024-12-16 09:54:16.312167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:17.902 [2024-12-16 09:54:16.312262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70925 ] 00:07:17.902 [2024-12-16 09:54:16.447628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.902 [2024-12-16 09:54:16.498499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.160 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.160 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.160 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.160 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.160 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.160 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.160 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.160 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=0x1 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=xor 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=2 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=software 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=32 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=32 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=1 00:07:18.161 09:54:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val=Yes 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.161 09:54:16 -- accel/accel.sh@21 -- # val= 00:07:18.161 09:54:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.161 09:54:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 ************************************ 00:07:19.098 END TEST accel_xor 00:07:19.098 ************************************ 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@21 -- # val= 00:07:19.098 09:54:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 09:54:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 09:54:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.098 09:54:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.098 09:54:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.098 00:07:19.098 real 0m2.816s 00:07:19.098 user 0m2.406s 00:07:19.098 sys 0m0.209s 00:07:19.098 09:54:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.098 09:54:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.357 09:54:17 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:19.357 09:54:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:19.357 09:54:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.357 09:54:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.357 ************************************ 00:07:19.357 START TEST accel_xor 00:07:19.357 ************************************ 00:07:19.357 
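The xor test that just completed used two source buffers and reached 289952 transfers/s, which at 4096 bytes per transfer is the reported ~1132 MiB/s; the accel_xor test starting here repeats the workload with three sources. As recorded in the traces, the only difference between the two invocations is the -x 3 flag:

  # the two xor variants exercised in this log
  ./build/examples/accel_perf -t 1 -w xor -y          # 2 source buffers (the default in this build)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3     # 3 source buffers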
09:54:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:19.357 09:54:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.357 09:54:17 -- accel/accel.sh@17 -- # local accel_module 00:07:19.357 09:54:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:19.357 09:54:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:19.357 09:54:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.357 09:54:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.357 09:54:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.357 09:54:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.357 09:54:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.357 09:54:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.357 09:54:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.357 09:54:17 -- accel/accel.sh@42 -- # jq -r . 00:07:19.357 [2024-12-16 09:54:17.762401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.357 [2024-12-16 09:54:17.762491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:07:19.357 [2024-12-16 09:54:17.891246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.357 [2024-12-16 09:54:17.944648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.733 09:54:19 -- accel/accel.sh@18 -- # out=' 00:07:20.733 SPDK Configuration: 00:07:20.733 Core mask: 0x1 00:07:20.733 00:07:20.733 Accel Perf Configuration: 00:07:20.733 Workload Type: xor 00:07:20.733 Source buffers: 3 00:07:20.733 Transfer size: 4096 bytes 00:07:20.733 Vector count 1 00:07:20.733 Module: software 00:07:20.733 Queue depth: 32 00:07:20.733 Allocate depth: 32 00:07:20.733 # threads/core: 1 00:07:20.733 Run time: 1 seconds 00:07:20.733 Verify: Yes 00:07:20.733 00:07:20.733 Running for 1 seconds... 00:07:20.733 00:07:20.733 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.733 ------------------------------------------------------------------------------------ 00:07:20.733 0,0 276320/s 1079 MiB/s 0 0 00:07:20.733 ==================================================================================== 00:07:20.733 Total 276320/s 1079 MiB/s 0 0' 00:07:20.733 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.733 09:54:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.733 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.733 09:54:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.733 09:54:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.733 09:54:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.733 09:54:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.733 09:54:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.733 09:54:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.733 09:54:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.733 09:54:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.733 09:54:19 -- accel/accel.sh@42 -- # jq -r . 00:07:20.733 [2024-12-16 09:54:19.153166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:20.733 [2024-12-16 09:54:19.153268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ] 00:07:20.733 [2024-12-16 09:54:19.288552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.733 [2024-12-16 09:54:19.341270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val=0x1 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val=xor 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val=3 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.992 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.992 09:54:19 -- accel/accel.sh@21 -- # val=software 00:07:20.992 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val=32 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val=32 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val=1 00:07:20.993 09:54:19 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val=Yes 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:20.993 09:54:19 -- accel/accel.sh@21 -- # val= 00:07:20.993 09:54:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # IFS=: 00:07:20.993 09:54:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@21 -- # val= 00:07:21.929 09:54:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.929 09:54:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.929 09:54:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.929 09:54:20 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:21.929 09:54:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.929 00:07:21.929 real 0m2.796s 00:07:21.929 user 0m2.388s 00:07:21.929 sys 0m0.205s 00:07:21.929 09:54:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.929 09:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:21.929 ************************************ 00:07:21.929 END TEST accel_xor 00:07:21.929 ************************************ 00:07:22.188 09:54:20 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:22.188 09:54:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:22.188 09:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.188 09:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:22.188 ************************************ 00:07:22.188 START TEST accel_dif_verify 00:07:22.188 ************************************ 
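Before the dif_verify output below: both DIF workloads in this section (dif_verify here and dif_generate further down) run with 4096-byte vectors, a 512-byte block size and 8 bytes of metadata per block, i.e. eight protection-information tuples and 64 bytes of DIF metadata per vector. A minimal arithmetic check of that geometry, derived only from the configuration lines printed below:

  # DIF geometry as reported in the accel_perf configuration
  echo $(( 4096 / 512 ))   # 8 blocks per 4096-byte vector
  echo $(( 8 * 8 ))        # 64 bytes of DIF metadata per vector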
00:07:22.188 09:54:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:22.188 09:54:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.188 09:54:20 -- accel/accel.sh@17 -- # local accel_module 00:07:22.188 09:54:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:22.188 09:54:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:22.188 09:54:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.188 09:54:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.188 09:54:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.188 09:54:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.188 09:54:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.188 09:54:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.188 09:54:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.188 09:54:20 -- accel/accel.sh@42 -- # jq -r . 00:07:22.188 [2024-12-16 09:54:20.610821] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.188 [2024-12-16 09:54:20.610910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71008 ] 00:07:22.188 [2024-12-16 09:54:20.738244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.188 [2024-12-16 09:54:20.790271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.565 09:54:21 -- accel/accel.sh@18 -- # out=' 00:07:23.565 SPDK Configuration: 00:07:23.565 Core mask: 0x1 00:07:23.565 00:07:23.565 Accel Perf Configuration: 00:07:23.565 Workload Type: dif_verify 00:07:23.565 Vector size: 4096 bytes 00:07:23.565 Transfer size: 4096 bytes 00:07:23.565 Block size: 512 bytes 00:07:23.565 Metadata size: 8 bytes 00:07:23.565 Vector count 1 00:07:23.565 Module: software 00:07:23.565 Queue depth: 32 00:07:23.565 Allocate depth: 32 00:07:23.565 # threads/core: 1 00:07:23.565 Run time: 1 seconds 00:07:23.565 Verify: No 00:07:23.565 00:07:23.565 Running for 1 seconds... 00:07:23.565 00:07:23.565 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.565 ------------------------------------------------------------------------------------ 00:07:23.565 0,0 125344/s 497 MiB/s 0 0 00:07:23.565 ==================================================================================== 00:07:23.565 Total 125344/s 489 MiB/s 0 0' 00:07:23.565 09:54:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.565 09:54:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.565 09:54:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:23.565 09:54:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:23.565 09:54:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.565 09:54:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.565 09:54:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.565 09:54:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.565 09:54:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.565 09:54:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.565 09:54:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.565 09:54:21 -- accel/accel.sh@42 -- # jq -r . 00:07:23.565 [2024-12-16 09:54:22.004656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:23.565 [2024-12-16 09:54:22.004754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:07:23.565 [2024-12-16 09:54:22.141619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.824 [2024-12-16 09:54:22.193930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val=0x1 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val=dif_verify 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val=software 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 
-- # val=32 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val=32 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.824 09:54:22 -- accel/accel.sh@21 -- # val=1 00:07:23.824 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.824 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.825 09:54:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.825 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.825 09:54:22 -- accel/accel.sh@21 -- # val=No 00:07:23.825 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.825 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.825 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:23.825 09:54:22 -- accel/accel.sh@21 -- # val= 00:07:23.825 09:54:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # IFS=: 00:07:23.825 09:54:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.760 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:24.760 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.760 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.760 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:25.019 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:25.019 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:25.019 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:25.019 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@21 -- # val= 00:07:25.019 09:54:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.019 09:54:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.019 09:54:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.019 09:54:23 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:25.019 09:54:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.019 00:07:25.019 real 0m2.800s 00:07:25.019 user 0m2.379s 00:07:25.019 sys 0m0.217s 00:07:25.019 09:54:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.019 09:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:25.019 ************************************ 00:07:25.019 END TEST 
accel_dif_verify 00:07:25.019 ************************************ 00:07:25.020 09:54:23 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:25.020 09:54:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:25.020 09:54:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.020 09:54:23 -- common/autotest_common.sh@10 -- # set +x 00:07:25.020 ************************************ 00:07:25.020 START TEST accel_dif_generate 00:07:25.020 ************************************ 00:07:25.020 09:54:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:25.020 09:54:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.020 09:54:23 -- accel/accel.sh@17 -- # local accel_module 00:07:25.020 09:54:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:25.020 09:54:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:25.020 09:54:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.020 09:54:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.020 09:54:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.020 09:54:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.020 09:54:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.020 09:54:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.020 09:54:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.020 09:54:23 -- accel/accel.sh@42 -- # jq -r . 00:07:25.020 [2024-12-16 09:54:23.470154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.020 [2024-12-16 09:54:23.470247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71062 ] 00:07:25.020 [2024-12-16 09:54:23.605753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.279 [2024-12-16 09:54:23.657904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.680 09:54:24 -- accel/accel.sh@18 -- # out=' 00:07:26.680 SPDK Configuration: 00:07:26.680 Core mask: 0x1 00:07:26.680 00:07:26.680 Accel Perf Configuration: 00:07:26.680 Workload Type: dif_generate 00:07:26.680 Vector size: 4096 bytes 00:07:26.680 Transfer size: 4096 bytes 00:07:26.680 Block size: 512 bytes 00:07:26.680 Metadata size: 8 bytes 00:07:26.680 Vector count 1 00:07:26.680 Module: software 00:07:26.680 Queue depth: 32 00:07:26.680 Allocate depth: 32 00:07:26.680 # threads/core: 1 00:07:26.680 Run time: 1 seconds 00:07:26.680 Verify: No 00:07:26.680 00:07:26.680 Running for 1 seconds... 
00:07:26.680 00:07:26.680 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.680 ------------------------------------------------------------------------------------ 00:07:26.680 0,0 152320/s 604 MiB/s 0 0 00:07:26.680 ==================================================================================== 00:07:26.680 Total 152320/s 595 MiB/s 0 0' 00:07:26.680 09:54:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.680 09:54:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:26.680 09:54:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.680 09:54:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.680 09:54:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.680 09:54:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.680 09:54:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.680 09:54:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.680 09:54:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.680 09:54:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.680 09:54:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.680 09:54:24 -- accel/accel.sh@42 -- # jq -r . 00:07:26.680 [2024-12-16 09:54:24.865330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.680 [2024-12-16 09:54:24.865462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71076 ] 00:07:26.680 [2024-12-16 09:54:24.998792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.680 [2024-12-16 09:54:25.052236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.680 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.680 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.680 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.680 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.680 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=0x1 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=dif_generate 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 
00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=software 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=32 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=32 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=1 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val=No 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:26.681 09:54:25 -- accel/accel.sh@21 -- # val= 00:07:26.681 09:54:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # IFS=: 00:07:26.681 09:54:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.618 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.618 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.618 09:54:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.618 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.618 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.877 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.877 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.877 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.877 09:54:26 -- 
accel/accel.sh@20 -- # IFS=: 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.877 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.877 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.877 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.877 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.877 09:54:26 -- accel/accel.sh@21 -- # val= 00:07:27.877 09:54:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.877 09:54:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.877 09:54:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.877 09:54:26 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:27.877 09:54:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.877 00:07:27.877 real 0m2.803s 00:07:27.877 user 0m2.397s 00:07:27.877 sys 0m0.208s 00:07:27.877 09:54:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.877 09:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:27.877 ************************************ 00:07:27.877 END TEST accel_dif_generate 00:07:27.877 ************************************ 00:07:27.877 09:54:26 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:27.877 09:54:26 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:27.877 09:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.877 09:54:26 -- common/autotest_common.sh@10 -- # set +x 00:07:27.877 ************************************ 00:07:27.877 START TEST accel_dif_generate_copy 00:07:27.877 ************************************ 00:07:27.877 09:54:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:27.877 09:54:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.877 09:54:26 -- accel/accel.sh@17 -- # local accel_module 00:07:27.877 09:54:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:27.877 09:54:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:27.877 09:54:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.877 09:54:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.877 09:54:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.877 09:54:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.877 09:54:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.877 09:54:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.878 09:54:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.878 09:54:26 -- accel/accel.sh@42 -- # jq -r . 00:07:27.878 [2024-12-16 09:54:26.326550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:27.878 [2024-12-16 09:54:26.326848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71110 ] 00:07:27.878 [2024-12-16 09:54:26.472977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.136 [2024-12-16 09:54:26.525819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.511 09:54:27 -- accel/accel.sh@18 -- # out=' 00:07:29.511 SPDK Configuration: 00:07:29.511 Core mask: 0x1 00:07:29.511 00:07:29.511 Accel Perf Configuration: 00:07:29.511 Workload Type: dif_generate_copy 00:07:29.511 Vector size: 4096 bytes 00:07:29.511 Transfer size: 4096 bytes 00:07:29.511 Vector count 1 00:07:29.511 Module: software 00:07:29.511 Queue depth: 32 00:07:29.511 Allocate depth: 32 00:07:29.511 # threads/core: 1 00:07:29.511 Run time: 1 seconds 00:07:29.511 Verify: No 00:07:29.511 00:07:29.511 Running for 1 seconds... 00:07:29.511 00:07:29.511 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.511 ------------------------------------------------------------------------------------ 00:07:29.511 0,0 116000/s 460 MiB/s 0 0 00:07:29.511 ==================================================================================== 00:07:29.511 Total 116000/s 453 MiB/s 0 0' 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.511 09:54:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:29.511 09:54:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.511 09:54:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:29.511 09:54:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.511 09:54:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.511 09:54:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.511 09:54:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.511 09:54:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.511 09:54:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.511 09:54:27 -- accel/accel.sh@42 -- # jq -r . 00:07:29.511 [2024-12-16 09:54:27.740332] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:29.511 [2024-12-16 09:54:27.740446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71130 ] 00:07:29.511 [2024-12-16 09:54:27.876715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.511 [2024-12-16 09:54:27.928771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.511 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.511 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.511 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.511 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.511 09:54:27 -- accel/accel.sh@21 -- # val=0x1 00:07:29.511 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.511 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.511 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.511 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.511 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.511 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val=software 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val=32 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val=32 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 
-- # val=1 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val=No 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:29.512 09:54:27 -- accel/accel.sh@21 -- # val= 00:07:29.512 09:54:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # IFS=: 00:07:29.512 09:54:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 ************************************ 00:07:30.889 END TEST accel_dif_generate_copy 00:07:30.889 ************************************ 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@21 -- # val= 00:07:30.889 09:54:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.889 09:54:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.889 09:54:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.889 09:54:29 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:30.889 09:54:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.889 00:07:30.889 real 0m2.837s 00:07:30.889 user 0m2.408s 00:07:30.889 sys 0m0.230s 00:07:30.889 09:54:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.889 09:54:29 -- common/autotest_common.sh@10 -- # set +x 00:07:30.889 09:54:29 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:30.889 09:54:29 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.889 09:54:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:30.889 09:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.889 09:54:29 -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.889 ************************************ 00:07:30.889 START TEST accel_comp 00:07:30.889 ************************************ 00:07:30.889 09:54:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.889 09:54:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.889 09:54:29 -- accel/accel.sh@17 -- # local accel_module 00:07:30.889 09:54:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.889 09:54:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.889 09:54:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.889 09:54:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.889 09:54:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.889 09:54:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.889 09:54:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.889 09:54:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.889 09:54:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.889 09:54:29 -- accel/accel.sh@42 -- # jq -r . 00:07:30.889 [2024-12-16 09:54:29.209439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.889 [2024-12-16 09:54:29.209532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71159 ] 00:07:30.889 [2024-12-16 09:54:29.344786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.889 [2024-12-16 09:54:29.409355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.265 09:54:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.265 00:07:32.265 SPDK Configuration: 00:07:32.265 Core mask: 0x1 00:07:32.265 00:07:32.265 Accel Perf Configuration: 00:07:32.265 Workload Type: compress 00:07:32.265 Transfer size: 4096 bytes 00:07:32.265 Vector count 1 00:07:32.265 Module: software 00:07:32.265 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.265 Queue depth: 32 00:07:32.265 Allocate depth: 32 00:07:32.265 # threads/core: 1 00:07:32.265 Run time: 1 seconds 00:07:32.265 Verify: No 00:07:32.265 00:07:32.265 Running for 1 seconds... 
00:07:32.265 00:07:32.265 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.265 ------------------------------------------------------------------------------------ 00:07:32.265 0,0 59328/s 247 MiB/s 0 0 00:07:32.265 ==================================================================================== 00:07:32.265 Total 59328/s 231 MiB/s 0 0' 00:07:32.265 09:54:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.265 09:54:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.265 09:54:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.265 09:54:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.265 09:54:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.265 09:54:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.265 09:54:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.265 09:54:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.265 09:54:30 -- accel/accel.sh@42 -- # jq -r . 00:07:32.265 [2024-12-16 09:54:30.627378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.265 [2024-12-16 09:54:30.627472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71184 ] 00:07:32.265 [2024-12-16 09:54:30.764219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.265 [2024-12-16 09:54:30.815328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=0x1 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=compress 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 
00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=software 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=32 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=32 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val=1 00:07:32.265 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.265 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.265 09:54:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.524 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.524 09:54:30 -- accel/accel.sh@21 -- # val=No 00:07:32.524 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.524 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.524 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.524 09:54:30 -- accel/accel.sh@21 -- # val= 00:07:32.524 09:54:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.524 09:54:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:31 -- accel/accel.sh@21 -- # val= 00:07:33.461 09:54:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@21 -- # val= 00:07:33.461 09:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@21 -- # val= 00:07:33.461 09:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@21 -- # val= 
00:07:33.461 09:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@21 -- # val= 00:07:33.461 09:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@21 -- # val= 00:07:33.461 09:54:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # IFS=: 00:07:33.461 09:54:32 -- accel/accel.sh@20 -- # read -r var val 00:07:33.461 09:54:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.461 09:54:32 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:33.461 09:54:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.461 00:07:33.461 real 0m2.823s 00:07:33.461 user 0m2.397s 00:07:33.461 sys 0m0.225s 00:07:33.461 09:54:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.461 ************************************ 00:07:33.461 END TEST accel_comp 00:07:33.461 ************************************ 00:07:33.461 09:54:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.461 09:54:32 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.461 09:54:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:33.461 09:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.461 09:54:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.461 ************************************ 00:07:33.461 START TEST accel_decomp 00:07:33.461 ************************************ 00:07:33.461 09:54:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.461 09:54:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.461 09:54:32 -- accel/accel.sh@17 -- # local accel_module 00:07:33.461 09:54:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.461 09:54:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.461 09:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.461 09:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.461 09:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.461 09:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.461 09:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.461 09:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.461 09:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.461 09:54:32 -- accel/accel.sh@42 -- # jq -r . 00:07:33.461 [2024-12-16 09:54:32.083386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.462 [2024-12-16 09:54:32.083480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:07:33.720 [2024-12-16 09:54:32.215581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.720 [2024-12-16 09:54:32.267667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.097 09:54:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:35.097 00:07:35.097 SPDK Configuration: 00:07:35.097 Core mask: 0x1 00:07:35.097 00:07:35.097 Accel Perf Configuration: 00:07:35.097 Workload Type: decompress 00:07:35.097 Transfer size: 4096 bytes 00:07:35.097 Vector count 1 00:07:35.097 Module: software 00:07:35.097 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.097 Queue depth: 32 00:07:35.097 Allocate depth: 32 00:07:35.097 # threads/core: 1 00:07:35.097 Run time: 1 seconds 00:07:35.097 Verify: Yes 00:07:35.097 00:07:35.097 Running for 1 seconds... 00:07:35.097 00:07:35.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.097 ------------------------------------------------------------------------------------ 00:07:35.097 0,0 85088/s 156 MiB/s 0 0 00:07:35.097 ==================================================================================== 00:07:35.097 Total 85088/s 332 MiB/s 0 0' 00:07:35.097 09:54:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.097 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.097 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.097 09:54:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.097 09:54:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.097 09:54:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.097 09:54:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.097 09:54:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.097 09:54:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.097 09:54:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.097 09:54:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.097 09:54:33 -- accel/accel.sh@42 -- # jq -r . 00:07:35.097 [2024-12-16 09:54:33.478081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:35.097 [2024-12-16 09:54:33.478177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71227 ] 00:07:35.097 [2024-12-16 09:54:33.610862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.097 [2024-12-16 09:54:33.663680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.355 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=0x1 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=decompress 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=software 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=32 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- 
accel/accel.sh@21 -- # val=32 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=1 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val=Yes 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.356 09:54:33 -- accel/accel.sh@21 -- # val= 00:07:35.356 09:54:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.356 09:54:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 ************************************ 00:07:36.292 END TEST accel_decomp 00:07:36.292 ************************************ 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@21 -- # val= 00:07:36.292 09:54:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # IFS=: 00:07:36.292 09:54:34 -- accel/accel.sh@20 -- # read -r var val 00:07:36.292 09:54:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.292 09:54:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.292 09:54:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.292 00:07:36.292 real 0m2.797s 00:07:36.292 user 0m2.387s 00:07:36.292 sys 0m0.210s 00:07:36.292 09:54:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.292 09:54:34 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 09:54:34 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
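For context, the accel_decmop_full wrapper invoked above drives the same accel_perf example binary as the preceding workloads, switched to the decompress workload with verification against the pre-built bib input file. As a rough standalone equivalent of the command the script traces next (assuming the SPDK examples are built under the repo path used on this VM, and omitting the -c /dev/fd/62 JSON config the wrapper feeds in), one could run:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0

Here -t 1 limits the run to 1 second, -w decompress selects the workload, -l names the compressed input, and -y enables result verification (consistent with the "Verify: Yes" line in the run below); -o 0 is passed through unchanged from the wrapper's arguments.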
00:07:36.292 09:54:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:36.292 09:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.292 09:54:34 -- common/autotest_common.sh@10 -- # set +x 00:07:36.292 ************************************ 00:07:36.292 START TEST accel_decmop_full 00:07:36.292 ************************************ 00:07:36.292 09:54:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.292 09:54:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.292 09:54:34 -- accel/accel.sh@17 -- # local accel_module 00:07:36.292 09:54:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.292 09:54:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.292 09:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.292 09:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.292 09:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.292 09:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.292 09:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.292 09:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.292 09:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.292 09:54:34 -- accel/accel.sh@42 -- # jq -r . 00:07:36.551 [2024-12-16 09:54:34.932145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.551 [2024-12-16 09:54:34.932255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:07:36.551 [2024-12-16 09:54:35.069270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.551 [2024-12-16 09:54:35.126526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.965 09:54:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:37.965 00:07:37.965 SPDK Configuration: 00:07:37.965 Core mask: 0x1 00:07:37.965 00:07:37.965 Accel Perf Configuration: 00:07:37.965 Workload Type: decompress 00:07:37.965 Transfer size: 111250 bytes 00:07:37.965 Vector count 1 00:07:37.965 Module: software 00:07:37.965 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.965 Queue depth: 32 00:07:37.965 Allocate depth: 32 00:07:37.965 # threads/core: 1 00:07:37.965 Run time: 1 seconds 00:07:37.965 Verify: Yes 00:07:37.965 00:07:37.965 Running for 1 seconds... 
00:07:37.965 00:07:37.965 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.965 ------------------------------------------------------------------------------------ 00:07:37.965 0,0 5568/s 230 MiB/s 0 0 00:07:37.965 ==================================================================================== 00:07:37.965 Total 5568/s 590 MiB/s 0 0' 00:07:37.965 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:37.965 09:54:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:37.965 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:37.965 09:54:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:37.965 09:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.965 09:54:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.965 09:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.965 09:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.965 09:54:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.965 09:54:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.965 09:54:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.965 09:54:36 -- accel/accel.sh@42 -- # jq -r . 00:07:37.965 [2024-12-16 09:54:36.360998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:37.965 [2024-12-16 09:54:36.361264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71281 ] 00:07:37.965 [2024-12-16 09:54:36.497613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.965 [2024-12-16 09:54:36.548722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=0x1 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=decompress 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.224 09:54:36 -- accel/accel.sh@20 
-- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=software 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=32 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=32 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=1 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val=Yes 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.224 09:54:36 -- accel/accel.sh@21 -- # val= 00:07:38.224 09:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.224 09:54:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # 
val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@21 -- # val= 00:07:39.160 09:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # IFS=: 00:07:39.160 09:54:37 -- accel/accel.sh@20 -- # read -r var val 00:07:39.160 09:54:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.160 09:54:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.160 09:54:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.160 00:07:39.160 real 0m2.851s 00:07:39.160 user 0m2.425s 00:07:39.160 sys 0m0.223s 00:07:39.160 09:54:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.160 ************************************ 00:07:39.160 END TEST accel_decmop_full 00:07:39.160 ************************************ 00:07:39.160 09:54:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 09:54:37 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.420 09:54:37 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:39.420 09:54:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.420 09:54:37 -- common/autotest_common.sh@10 -- # set +x 00:07:39.420 ************************************ 00:07:39.420 START TEST accel_decomp_mcore 00:07:39.420 ************************************ 00:07:39.420 09:54:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.420 09:54:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.420 09:54:37 -- accel/accel.sh@17 -- # local accel_module 00:07:39.420 09:54:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.420 09:54:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.420 09:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.420 09:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.420 09:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.420 09:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.420 09:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.420 09:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.420 09:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.420 09:54:37 -- accel/accel.sh@42 -- # jq -r . 00:07:39.420 [2024-12-16 09:54:37.828410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:39.420 [2024-12-16 09:54:37.828643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71321 ] 00:07:39.420 [2024-12-16 09:54:37.964620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.420 [2024-12-16 09:54:38.020624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.420 [2024-12-16 09:54:38.020764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.420 [2024-12-16 09:54:38.020893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.420 [2024-12-16 09:54:38.021211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.797 09:54:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.797 00:07:40.797 SPDK Configuration: 00:07:40.797 Core mask: 0xf 00:07:40.797 00:07:40.797 Accel Perf Configuration: 00:07:40.797 Workload Type: decompress 00:07:40.797 Transfer size: 4096 bytes 00:07:40.797 Vector count 1 00:07:40.797 Module: software 00:07:40.797 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.797 Queue depth: 32 00:07:40.797 Allocate depth: 32 00:07:40.797 # threads/core: 1 00:07:40.797 Run time: 1 seconds 00:07:40.797 Verify: Yes 00:07:40.797 00:07:40.797 Running for 1 seconds... 00:07:40.797 00:07:40.797 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.797 ------------------------------------------------------------------------------------ 00:07:40.797 0,0 66944/s 123 MiB/s 0 0 00:07:40.797 3,0 64032/s 118 MiB/s 0 0 00:07:40.797 2,0 64480/s 118 MiB/s 0 0 00:07:40.797 1,0 64640/s 119 MiB/s 0 0 00:07:40.797 ==================================================================================== 00:07:40.797 Total 260096/s 1016 MiB/s 0 0' 00:07:40.797 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:40.797 09:54:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.797 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:40.797 09:54:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.797 09:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.797 09:54:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.797 09:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.797 09:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.797 09:54:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.797 09:54:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.797 09:54:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.797 09:54:39 -- accel/accel.sh@42 -- # jq -r . 00:07:40.797 [2024-12-16 09:54:39.246277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:40.797 [2024-12-16 09:54:39.246404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71338 ] 00:07:40.797 [2024-12-16 09:54:39.381249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.058 [2024-12-16 09:54:39.438716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.058 [2024-12-16 09:54:39.438836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.058 [2024-12-16 09:54:39.438946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.058 [2024-12-16 09:54:39.438946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=0xf 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=decompress 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=software 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 
00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=32 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=32 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=1 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val=Yes 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.058 09:54:39 -- accel/accel.sh@21 -- # val= 00:07:41.058 09:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.058 09:54:39 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@21 -- # val= 00:07:42.436 09:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # IFS=: 00:07:42.436 09:54:40 -- accel/accel.sh@20 -- # read -r var val 00:07:42.436 09:54:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.436 09:54:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.436 09:54:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.436 00:07:42.436 real 0m2.840s 00:07:42.436 user 0m9.177s 00:07:42.436 sys 0m0.248s 00:07:42.436 09:54:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.436 ************************************ 00:07:42.436 END TEST accel_decomp_mcore 00:07:42.436 ************************************ 00:07:42.436 09:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 09:54:40 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.436 09:54:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:42.436 09:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.436 09:54:40 -- common/autotest_common.sh@10 -- # set +x 00:07:42.436 ************************************ 00:07:42.436 START TEST accel_decomp_full_mcore 00:07:42.436 ************************************ 00:07:42.436 09:54:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.436 09:54:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.436 09:54:40 -- accel/accel.sh@17 -- # local accel_module 00:07:42.436 09:54:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.436 09:54:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.436 09:54:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.436 09:54:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.436 09:54:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.436 09:54:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.436 09:54:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.436 09:54:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.436 09:54:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.436 09:54:40 -- accel/accel.sh@42 -- # jq -r . 00:07:42.436 [2024-12-16 09:54:40.714803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.436 [2024-12-16 09:54:40.714890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71379 ] 00:07:42.436 [2024-12-16 09:54:40.850175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.436 [2024-12-16 09:54:40.912693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.436 [2024-12-16 09:54:40.912799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.436 [2024-12-16 09:54:40.913067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.436 [2024-12-16 09:54:40.912944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.815 09:54:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:43.815 00:07:43.815 SPDK Configuration: 00:07:43.815 Core mask: 0xf 00:07:43.815 00:07:43.815 Accel Perf Configuration: 00:07:43.815 Workload Type: decompress 00:07:43.815 Transfer size: 111250 bytes 00:07:43.815 Vector count 1 00:07:43.815 Module: software 00:07:43.815 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.815 Queue depth: 32 00:07:43.815 Allocate depth: 32 00:07:43.815 # threads/core: 1 00:07:43.815 Run time: 1 seconds 00:07:43.815 Verify: Yes 00:07:43.815 00:07:43.815 Running for 1 seconds... 00:07:43.815 00:07:43.815 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.815 ------------------------------------------------------------------------------------ 00:07:43.815 0,0 5184/s 214 MiB/s 0 0 00:07:43.815 3,0 5184/s 214 MiB/s 0 0 00:07:43.815 2,0 5184/s 214 MiB/s 0 0 00:07:43.815 1,0 5184/s 214 MiB/s 0 0 00:07:43.815 ==================================================================================== 00:07:43.815 Total 20736/s 2200 MiB/s 0 0' 00:07:43.815 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.816 09:54:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:43.816 09:54:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.816 09:54:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.816 09:54:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.816 09:54:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.816 09:54:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.816 09:54:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.816 09:54:42 -- accel/accel.sh@42 -- # jq -r . 00:07:43.816 [2024-12-16 09:54:42.151840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
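The only change from the earlier 4096-byte invocation is the added -o 0; with it the reported transfer size is 111250 bytes. A minimal sketch of the same command (flags copied from the log above, software module assumed):
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf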
00:07:43.816 [2024-12-16 09:54:42.152087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71401 ] 00:07:43.816 [2024-12-16 09:54:42.287793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.816 [2024-12-16 09:54:42.342577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.816 [2024-12-16 09:54:42.342653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.816 [2024-12-16 09:54:42.342767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.816 [2024-12-16 09:54:42.342772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=0xf 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=decompress 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=software 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 
00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=32 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=32 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=1 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val=Yes 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:43.816 09:54:42 -- accel/accel.sh@21 -- # val= 00:07:43.816 09:54:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # IFS=: 00:07:43.816 09:54:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- 
accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@21 -- # val= 00:07:45.196 09:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # IFS=: 00:07:45.196 09:54:43 -- accel/accel.sh@20 -- # read -r var val 00:07:45.196 09:54:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.196 09:54:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.196 09:54:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.196 00:07:45.196 real 0m2.870s 00:07:45.196 user 0m9.312s 00:07:45.196 sys 0m0.233s 00:07:45.196 09:54:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.196 09:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.196 ************************************ 00:07:45.197 END TEST accel_decomp_full_mcore 00:07:45.197 ************************************ 00:07:45.197 09:54:43 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.197 09:54:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:45.197 09:54:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.197 09:54:43 -- common/autotest_common.sh@10 -- # set +x 00:07:45.197 ************************************ 00:07:45.197 START TEST accel_decomp_mthread 00:07:45.197 ************************************ 00:07:45.197 09:54:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.197 09:54:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.197 09:54:43 -- accel/accel.sh@17 -- # local accel_module 00:07:45.197 09:54:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.197 09:54:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.197 09:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.197 09:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.197 09:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.197 09:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.197 09:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.197 09:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.197 09:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.197 09:54:43 -- accel/accel.sh@42 -- # jq -r . 00:07:45.197 [2024-12-16 09:54:43.634240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.197 [2024-12-16 09:54:43.634333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:07:45.197 [2024-12-16 09:54:43.772149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.455 [2024-12-16 09:54:43.825027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.832 09:54:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:46.832 00:07:46.832 SPDK Configuration: 00:07:46.832 Core mask: 0x1 00:07:46.832 00:07:46.832 Accel Perf Configuration: 00:07:46.832 Workload Type: decompress 00:07:46.832 Transfer size: 4096 bytes 00:07:46.832 Vector count 1 00:07:46.832 Module: software 00:07:46.832 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.832 Queue depth: 32 00:07:46.832 Allocate depth: 32 00:07:46.832 # threads/core: 2 00:07:46.832 Run time: 1 seconds 00:07:46.832 Verify: Yes 00:07:46.832 00:07:46.832 Running for 1 seconds... 00:07:46.832 00:07:46.832 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.832 ------------------------------------------------------------------------------------ 00:07:46.832 0,1 42912/s 79 MiB/s 0 0 00:07:46.832 0,0 42784/s 78 MiB/s 0 0 00:07:46.832 ==================================================================================== 00:07:46.832 Total 85696/s 334 MiB/s 0 0' 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.832 09:54:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.832 09:54:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.832 09:54:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.832 09:54:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.832 09:54:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.832 09:54:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.832 09:54:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.832 09:54:45 -- accel/accel.sh@42 -- # jq -r . 00:07:46.832 [2024-12-16 09:54:45.040035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
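The mthread run above uses a single core with two worker threads; the distinguishing flag is -T 2 (reported as "# threads/core: 2" in the configuration block). A minimal standalone sketch, again assuming the default software module:
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2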
00:07:46.832 [2024-12-16 09:54:45.040129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71453 ] 00:07:46.832 [2024-12-16 09:54:45.175408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.832 [2024-12-16 09:54:45.228172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=0x1 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=decompress 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=software 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=32 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- 
accel/accel.sh@21 -- # val=32 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=2 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val=Yes 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:46.832 09:54:45 -- accel/accel.sh@21 -- # val= 00:07:46.832 09:54:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # IFS=: 00:07:46.832 09:54:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@21 -- # val= 00:07:48.207 09:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # IFS=: 00:07:48.207 09:54:46 -- accel/accel.sh@20 -- # read -r var val 00:07:48.207 09:54:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.207 09:54:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.207 09:54:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.207 00:07:48.207 real 0m2.815s 00:07:48.207 user 0m2.393s 00:07:48.207 sys 0m0.223s 00:07:48.207 09:54:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.207 ************************************ 00:07:48.207 END TEST accel_decomp_mthread 00:07:48.207 
************************************ 00:07:48.207 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:07:48.207 09:54:46 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.207 09:54:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:48.207 09:54:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.207 09:54:46 -- common/autotest_common.sh@10 -- # set +x 00:07:48.207 ************************************ 00:07:48.207 START TEST accel_deomp_full_mthread 00:07:48.207 ************************************ 00:07:48.207 09:54:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.207 09:54:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.207 09:54:46 -- accel/accel.sh@17 -- # local accel_module 00:07:48.207 09:54:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.207 09:54:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.207 09:54:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.207 09:54:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.207 09:54:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.207 09:54:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.207 09:54:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.207 09:54:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.207 09:54:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.207 09:54:46 -- accel/accel.sh@42 -- # jq -r . 00:07:48.207 [2024-12-16 09:54:46.492577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.207 [2024-12-16 09:54:46.492665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71487 ] 00:07:48.207 [2024-12-16 09:54:46.614646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.207 [2024-12-16 09:54:46.666697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.605 09:54:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:49.605 00:07:49.605 SPDK Configuration: 00:07:49.605 Core mask: 0x1 00:07:49.605 00:07:49.605 Accel Perf Configuration: 00:07:49.605 Workload Type: decompress 00:07:49.605 Transfer size: 111250 bytes 00:07:49.605 Vector count 1 00:07:49.605 Module: software 00:07:49.605 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.605 Queue depth: 32 00:07:49.605 Allocate depth: 32 00:07:49.605 # threads/core: 2 00:07:49.605 Run time: 1 seconds 00:07:49.605 Verify: Yes 00:07:49.605 00:07:49.605 Running for 1 seconds... 
00:07:49.605 00:07:49.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.605 ------------------------------------------------------------------------------------ 00:07:49.605 0,1 2880/s 118 MiB/s 0 0 00:07:49.605 0,0 2848/s 117 MiB/s 0 0 00:07:49.605 ==================================================================================== 00:07:49.605 Total 5728/s 607 MiB/s 0 0' 00:07:49.605 09:54:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.605 09:54:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.605 09:54:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.605 09:54:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.605 09:54:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.605 09:54:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.605 09:54:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.605 09:54:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.605 09:54:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.605 09:54:47 -- accel/accel.sh@42 -- # jq -r . 00:07:49.605 [2024-12-16 09:54:47.897279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.605 [2024-12-16 09:54:47.897416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71507 ] 00:07:49.605 [2024-12-16 09:54:48.028604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.605 [2024-12-16 09:54:48.080430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=0x1 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=decompress 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=software 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=32 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=32 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=2 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val=Yes 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.605 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.605 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.605 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.606 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.606 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:49.606 09:54:48 -- accel/accel.sh@21 -- # val= 00:07:49.606 09:54:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.606 09:54:48 -- accel/accel.sh@20 -- # IFS=: 00:07:49.606 09:54:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # 
read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@21 -- # val= 00:07:51.010 09:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # IFS=: 00:07:51.010 09:54:49 -- accel/accel.sh@20 -- # read -r var val 00:07:51.010 09:54:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.010 09:54:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:51.010 09:54:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.010 00:07:51.010 real 0m2.823s 00:07:51.010 user 0m2.428s 00:07:51.010 sys 0m0.196s 00:07:51.010 09:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.010 ************************************ 00:07:51.010 END TEST accel_deomp_full_mthread 00:07:51.010 ************************************ 00:07:51.010 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 09:54:49 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:51.010 09:54:49 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.010 09:54:49 -- accel/accel.sh@129 -- # build_accel_config 00:07:51.010 09:54:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.010 09:54:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:51.010 09:54:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.010 09:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.010 09:54:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.010 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 09:54:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.010 09:54:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.010 09:54:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.010 09:54:49 -- accel/accel.sh@42 -- # jq -r . 00:07:51.010 ************************************ 00:07:51.010 START TEST accel_dif_functional_tests 00:07:51.010 ************************************ 00:07:51.010 09:54:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.010 [2024-12-16 09:54:49.406163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
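The dif functional test above receives its accel configuration on an anonymous descriptor (-c /dev/fd/62) assembled by build_accel_config. A rough sketch of the same pattern using bash process substitution; CONFIG_JSON is only a stand-in for whatever JSON the harness builds:
    ./test/accel/dif/dif -c <(printf '%s' "$CONFIG_JSON")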
00:07:51.010 [2024-12-16 09:54:49.406262] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71542 ] 00:07:51.010 [2024-12-16 09:54:49.544797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.010 [2024-12-16 09:54:49.599307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.010 [2024-12-16 09:54:49.599437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.010 [2024-12-16 09:54:49.599441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.307 00:07:51.307 00:07:51.307 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.307 http://cunit.sourceforge.net/ 00:07:51.307 00:07:51.307 00:07:51.307 Suite: accel_dif 00:07:51.307 Test: verify: DIF generated, GUARD check ...passed 00:07:51.307 Test: verify: DIF generated, APPTAG check ...passed 00:07:51.307 Test: verify: DIF generated, REFTAG check ...passed 00:07:51.307 Test: verify: DIF not generated, GUARD check ...passed 00:07:51.307 Test: verify: DIF not generated, APPTAG check ...[2024-12-16 09:54:49.684059] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.307 [2024-12-16 09:54:49.684154] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.307 [2024-12-16 09:54:49.684188] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.307 passed 00:07:51.307 Test: verify: DIF not generated, REFTAG check ...[2024-12-16 09:54:49.684576] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.307 passed 00:07:51.307 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:51.307 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:51.307 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:51.307 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:51.307 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:51.307 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:51.307 Test: generate copy: DIF generated, GUARD check ...[2024-12-16 09:54:49.684614] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.307 [2024-12-16 09:54:49.684643] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.307 [2024-12-16 09:54:49.684711] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:51.307 [2024-12-16 09:54:49.684882] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:51.307 passed 00:07:51.307 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:51.307 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:51.307 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:51.307 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:51.307 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:51.307 Test: generate copy: iovecs-len validate ...passed 00:07:51.307 Test: generate copy: buffer alignment validate ...passed 00:07:51.307 00:07:51.307 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.307 suites 1 1 n/a 0 0 00:07:51.307 tests 20 20 20 0 0 00:07:51.307 
asserts 204 204 204 0 n/a 00:07:51.307 00:07:51.307 Elapsed time = 0.004 seconds 00:07:51.308 [2024-12-16 09:54:49.685321] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:51.308 00:07:51.308 real 0m0.526s 00:07:51.308 user 0m0.703s 00:07:51.308 sys 0m0.150s 00:07:51.308 09:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.308 ************************************ 00:07:51.308 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.308 END TEST accel_dif_functional_tests 00:07:51.308 ************************************ 00:07:51.567 00:07:51.567 real 1m0.791s 00:07:51.567 user 1m5.208s 00:07:51.567 sys 0m5.944s 00:07:51.567 09:54:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.567 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.567 ************************************ 00:07:51.567 END TEST accel 00:07:51.567 ************************************ 00:07:51.567 09:54:49 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.567 09:54:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.567 09:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.567 09:54:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.567 ************************************ 00:07:51.567 START TEST accel_rpc 00:07:51.567 ************************************ 00:07:51.567 09:54:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.567 * Looking for test storage... 00:07:51.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:51.567 09:54:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.567 09:54:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.567 09:54:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.567 09:54:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.567 09:54:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.567 09:54:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.567 09:54:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.567 09:54:50 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.567 09:54:50 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.567 09:54:50 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.567 09:54:50 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.567 09:54:50 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.567 09:54:50 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.567 09:54:50 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.567 09:54:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.567 09:54:50 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.567 09:54:50 -- scripts/common.sh@344 -- # : 1 00:07:51.567 09:54:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.567 09:54:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.567 09:54:50 -- scripts/common.sh@364 -- # decimal 1 00:07:51.567 09:54:50 -- scripts/common.sh@352 -- # local d=1 00:07:51.567 09:54:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.567 09:54:50 -- scripts/common.sh@354 -- # echo 1 00:07:51.567 09:54:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.567 09:54:50 -- scripts/common.sh@365 -- # decimal 2 00:07:51.567 09:54:50 -- scripts/common.sh@352 -- # local d=2 00:07:51.567 09:54:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.567 09:54:50 -- scripts/common.sh@354 -- # echo 2 00:07:51.567 09:54:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.567 09:54:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.567 09:54:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.567 09:54:50 -- scripts/common.sh@367 -- # return 0 00:07:51.567 09:54:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.567 09:54:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.567 --rc genhtml_branch_coverage=1 00:07:51.567 --rc genhtml_function_coverage=1 00:07:51.567 --rc genhtml_legend=1 00:07:51.567 --rc geninfo_all_blocks=1 00:07:51.567 --rc geninfo_unexecuted_blocks=1 00:07:51.567 00:07:51.567 ' 00:07:51.567 09:54:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.567 --rc genhtml_branch_coverage=1 00:07:51.567 --rc genhtml_function_coverage=1 00:07:51.567 --rc genhtml_legend=1 00:07:51.567 --rc geninfo_all_blocks=1 00:07:51.567 --rc geninfo_unexecuted_blocks=1 00:07:51.567 00:07:51.567 ' 00:07:51.567 09:54:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.567 --rc genhtml_branch_coverage=1 00:07:51.567 --rc genhtml_function_coverage=1 00:07:51.567 --rc genhtml_legend=1 00:07:51.567 --rc geninfo_all_blocks=1 00:07:51.567 --rc geninfo_unexecuted_blocks=1 00:07:51.567 00:07:51.567 ' 00:07:51.567 09:54:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.567 --rc genhtml_branch_coverage=1 00:07:51.567 --rc genhtml_function_coverage=1 00:07:51.567 --rc genhtml_legend=1 00:07:51.567 --rc geninfo_all_blocks=1 00:07:51.567 --rc geninfo_unexecuted_blocks=1 00:07:51.567 00:07:51.567 ' 00:07:51.567 09:54:50 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:51.567 09:54:50 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71614 00:07:51.567 09:54:50 -- accel/accel_rpc.sh@15 -- # waitforlisten 71614 00:07:51.567 09:54:50 -- common/autotest_common.sh@829 -- # '[' -z 71614 ']' 00:07:51.567 09:54:50 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:51.567 09:54:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.567 09:54:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.567 09:54:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
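Once the target reports its socket, the same calls the test issues below can be made by hand with SPDK's standard RPC client; a sketch of the equivalent manual invocations (socket path and RPC names taken from the log):
    ./scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o copy -m software
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init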
00:07:51.567 09:54:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.567 09:54:50 -- common/autotest_common.sh@10 -- # set +x 00:07:51.825 [2024-12-16 09:54:50.210264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.825 [2024-12-16 09:54:50.210423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71614 ] 00:07:51.825 [2024-12-16 09:54:50.349483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.825 [2024-12-16 09:54:50.425418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:51.825 [2024-12-16 09:54:50.425613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.760 09:54:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.760 09:54:51 -- common/autotest_common.sh@862 -- # return 0 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:52.760 09:54:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:52.760 09:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.760 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.760 ************************************ 00:07:52.760 START TEST accel_assign_opcode 00:07:52.760 ************************************ 00:07:52.760 09:54:51 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:52.760 09:54:51 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:52.760 09:54:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.761 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.761 [2024-12-16 09:54:51.230122] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:52.761 09:54:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.761 09:54:51 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:52.761 09:54:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.761 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:52.761 [2024-12-16 09:54:51.238100] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:52.761 09:54:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.761 09:54:51 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:52.761 09:54:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.761 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 09:54:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.019 09:54:51 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:53.019 09:54:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.019 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 09:54:51 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:53.019 09:54:51 -- accel/accel_rpc.sh@42 -- # grep software 00:07:53.019 09:54:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.019 software 00:07:53.019 00:07:53.019 
real 0m0.288s 00:07:53.019 user 0m0.053s 00:07:53.019 sys 0m0.012s 00:07:53.019 09:54:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.019 ************************************ 00:07:53.019 END TEST accel_assign_opcode 00:07:53.019 ************************************ 00:07:53.019 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 09:54:51 -- accel/accel_rpc.sh@55 -- # killprocess 71614 00:07:53.019 09:54:51 -- common/autotest_common.sh@936 -- # '[' -z 71614 ']' 00:07:53.019 09:54:51 -- common/autotest_common.sh@940 -- # kill -0 71614 00:07:53.019 09:54:51 -- common/autotest_common.sh@941 -- # uname 00:07:53.019 09:54:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.019 09:54:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71614 00:07:53.019 09:54:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.019 09:54:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.019 09:54:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71614' 00:07:53.019 killing process with pid 71614 00:07:53.019 09:54:51 -- common/autotest_common.sh@955 -- # kill 71614 00:07:53.019 09:54:51 -- common/autotest_common.sh@960 -- # wait 71614 00:07:53.585 00:07:53.585 real 0m1.993s 00:07:53.585 user 0m2.102s 00:07:53.585 sys 0m0.489s 00:07:53.585 09:54:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.585 09:54:51 -- common/autotest_common.sh@10 -- # set +x 00:07:53.585 ************************************ 00:07:53.585 END TEST accel_rpc 00:07:53.585 ************************************ 00:07:53.585 09:54:52 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.585 09:54:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.585 09:54:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.585 09:54:52 -- common/autotest_common.sh@10 -- # set +x 00:07:53.585 ************************************ 00:07:53.585 START TEST app_cmdline 00:07:53.585 ************************************ 00:07:53.585 09:54:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:53.585 * Looking for test storage... 
00:07:53.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.585 09:54:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.586 09:54:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.586 09:54:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.586 09:54:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.586 09:54:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.586 09:54:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.586 09:54:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.586 09:54:52 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.586 09:54:52 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.586 09:54:52 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.586 09:54:52 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.586 09:54:52 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.586 09:54:52 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.586 09:54:52 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.586 09:54:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.586 09:54:52 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.586 09:54:52 -- scripts/common.sh@344 -- # : 1 00:07:53.586 09:54:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.586 09:54:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.586 09:54:52 -- scripts/common.sh@364 -- # decimal 1 00:07:53.586 09:54:52 -- scripts/common.sh@352 -- # local d=1 00:07:53.586 09:54:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.586 09:54:52 -- scripts/common.sh@354 -- # echo 1 00:07:53.586 09:54:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.586 09:54:52 -- scripts/common.sh@365 -- # decimal 2 00:07:53.586 09:54:52 -- scripts/common.sh@352 -- # local d=2 00:07:53.586 09:54:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.586 09:54:52 -- scripts/common.sh@354 -- # echo 2 00:07:53.586 09:54:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.586 09:54:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.586 09:54:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.586 09:54:52 -- scripts/common.sh@367 -- # return 0 00:07:53.586 09:54:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.586 09:54:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.586 --rc genhtml_branch_coverage=1 00:07:53.586 --rc genhtml_function_coverage=1 00:07:53.586 --rc genhtml_legend=1 00:07:53.586 --rc geninfo_all_blocks=1 00:07:53.586 --rc geninfo_unexecuted_blocks=1 00:07:53.586 00:07:53.586 ' 00:07:53.586 09:54:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.586 --rc genhtml_branch_coverage=1 00:07:53.586 --rc genhtml_function_coverage=1 00:07:53.586 --rc genhtml_legend=1 00:07:53.586 --rc geninfo_all_blocks=1 00:07:53.586 --rc geninfo_unexecuted_blocks=1 00:07:53.586 00:07:53.586 ' 00:07:53.586 09:54:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.586 --rc genhtml_branch_coverage=1 00:07:53.586 --rc genhtml_function_coverage=1 00:07:53.586 --rc genhtml_legend=1 00:07:53.586 --rc geninfo_all_blocks=1 00:07:53.586 --rc geninfo_unexecuted_blocks=1 00:07:53.586 00:07:53.586 ' 00:07:53.586 09:54:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.586 --rc genhtml_branch_coverage=1 00:07:53.586 --rc genhtml_function_coverage=1 00:07:53.586 --rc genhtml_legend=1 00:07:53.586 --rc geninfo_all_blocks=1 00:07:53.586 --rc geninfo_unexecuted_blocks=1 00:07:53.586 00:07:53.586 ' 00:07:53.586 09:54:52 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:53.586 09:54:52 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71732 00:07:53.586 09:54:52 -- app/cmdline.sh@18 -- # waitforlisten 71732 00:07:53.586 09:54:52 -- common/autotest_common.sh@829 -- # '[' -z 71732 ']' 00:07:53.586 09:54:52 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:53.586 09:54:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.586 09:54:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.586 09:54:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.586 09:54:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.586 09:54:52 -- common/autotest_common.sh@10 -- # set +x 00:07:53.845 [2024-12-16 09:54:52.257269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:53.845 [2024-12-16 09:54:52.257407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71732 ] 00:07:53.845 [2024-12-16 09:54:52.396775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.845 [2024-12-16 09:54:52.460263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.845 [2024-12-16 09:54:52.460461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.780 09:54:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.780 09:54:53 -- common/autotest_common.sh@862 -- # return 0 00:07:54.780 09:54:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:55.039 { 00:07:55.039 "fields": { 00:07:55.039 "commit": "c13c99a5e", 00:07:55.039 "major": 24, 00:07:55.039 "minor": 1, 00:07:55.039 "patch": 1, 00:07:55.039 "suffix": "-pre" 00:07:55.039 }, 00:07:55.039 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:55.039 } 00:07:55.039 09:54:53 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.039 09:54:53 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.039 09:54:53 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.039 09:54:53 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.039 09:54:53 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.039 09:54:53 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.039 09:54:53 -- app/cmdline.sh@26 -- # sort 00:07:55.039 09:54:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.039 09:54:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.039 09:54:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.039 09:54:53 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.039 09:54:53 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.039 09:54:53 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.039 09:54:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.039 09:54:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.039 09:54:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.039 09:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.039 09:54:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.039 09:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.039 09:54:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.039 09:54:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.039 09:54:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.039 09:54:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:55.039 09:54:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.298 2024/12/16 09:54:53 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:55.298 request: 00:07:55.298 { 00:07:55.298 "method": "env_dpdk_get_mem_stats", 00:07:55.298 "params": {} 00:07:55.298 } 00:07:55.298 Got JSON-RPC error response 00:07:55.298 GoRPCClient: error on JSON-RPC call 00:07:55.298 09:54:53 -- common/autotest_common.sh@653 -- # es=1 00:07:55.298 09:54:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.298 09:54:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.298 09:54:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.298 09:54:53 -- app/cmdline.sh@1 -- # killprocess 71732 00:07:55.298 09:54:53 -- common/autotest_common.sh@936 -- # '[' -z 71732 ']' 00:07:55.298 09:54:53 -- common/autotest_common.sh@940 -- # kill -0 71732 00:07:55.298 09:54:53 -- common/autotest_common.sh@941 -- # uname 00:07:55.298 09:54:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.298 09:54:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71732 00:07:55.298 09:54:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.298 killing process with pid 71732 00:07:55.298 09:54:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.298 09:54:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71732' 00:07:55.298 09:54:53 -- common/autotest_common.sh@955 -- # kill 71732 00:07:55.298 09:54:53 -- common/autotest_common.sh@960 -- # wait 71732 00:07:55.865 00:07:55.865 real 0m2.228s 00:07:55.865 user 0m2.793s 00:07:55.865 sys 0m0.515s 00:07:55.865 ************************************ 00:07:55.865 END TEST app_cmdline 00:07:55.865 ************************************ 00:07:55.865 09:54:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.865 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:55.865 09:54:54 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.865 09:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.865 09:54:54 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.865 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:55.865 ************************************ 00:07:55.865 START TEST version 00:07:55.865 ************************************ 00:07:55.865 09:54:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:55.865 * Looking for test storage... 00:07:55.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.865 09:54:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.865 09:54:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.865 09:54:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.865 09:54:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.865 09:54:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.865 09:54:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.865 09:54:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.865 09:54:54 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.865 09:54:54 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.865 09:54:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.865 09:54:54 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.865 09:54:54 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.865 09:54:54 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.865 09:54:54 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.865 09:54:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.865 09:54:54 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.865 09:54:54 -- scripts/common.sh@344 -- # : 1 00:07:55.865 09:54:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.865 09:54:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.865 09:54:54 -- scripts/common.sh@364 -- # decimal 1 00:07:55.865 09:54:54 -- scripts/common.sh@352 -- # local d=1 00:07:55.865 09:54:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.865 09:54:54 -- scripts/common.sh@354 -- # echo 1 00:07:55.865 09:54:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.865 09:54:54 -- scripts/common.sh@365 -- # decimal 2 00:07:55.865 09:54:54 -- scripts/common.sh@352 -- # local d=2 00:07:55.865 09:54:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.865 09:54:54 -- scripts/common.sh@354 -- # echo 2 00:07:55.865 09:54:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.865 09:54:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.865 09:54:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.865 09:54:54 -- scripts/common.sh@367 -- # return 0 00:07:55.865 09:54:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.865 09:54:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.865 --rc genhtml_branch_coverage=1 00:07:55.865 --rc genhtml_function_coverage=1 00:07:55.865 --rc genhtml_legend=1 00:07:55.865 --rc geninfo_all_blocks=1 00:07:55.865 --rc geninfo_unexecuted_blocks=1 00:07:55.865 00:07:55.865 ' 00:07:55.865 09:54:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.865 --rc genhtml_branch_coverage=1 00:07:55.865 --rc genhtml_function_coverage=1 00:07:55.865 --rc genhtml_legend=1 00:07:55.865 --rc geninfo_all_blocks=1 00:07:55.865 --rc geninfo_unexecuted_blocks=1 00:07:55.865 00:07:55.865 ' 00:07:55.865 
09:54:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.865 --rc genhtml_branch_coverage=1 00:07:55.865 --rc genhtml_function_coverage=1 00:07:55.865 --rc genhtml_legend=1 00:07:55.865 --rc geninfo_all_blocks=1 00:07:55.865 --rc geninfo_unexecuted_blocks=1 00:07:55.865 00:07:55.865 ' 00:07:55.865 09:54:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.865 --rc genhtml_branch_coverage=1 00:07:55.866 --rc genhtml_function_coverage=1 00:07:55.866 --rc genhtml_legend=1 00:07:55.866 --rc geninfo_all_blocks=1 00:07:55.866 --rc geninfo_unexecuted_blocks=1 00:07:55.866 00:07:55.866 ' 00:07:55.866 09:54:54 -- app/version.sh@17 -- # get_header_version major 00:07:55.866 09:54:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.866 09:54:54 -- app/version.sh@14 -- # cut -f2 00:07:55.866 09:54:54 -- app/version.sh@14 -- # tr -d '"' 00:07:55.866 09:54:54 -- app/version.sh@17 -- # major=24 00:07:55.866 09:54:54 -- app/version.sh@18 -- # get_header_version minor 00:07:55.866 09:54:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.866 09:54:54 -- app/version.sh@14 -- # cut -f2 00:07:55.866 09:54:54 -- app/version.sh@14 -- # tr -d '"' 00:07:55.866 09:54:54 -- app/version.sh@18 -- # minor=1 00:07:55.866 09:54:54 -- app/version.sh@19 -- # get_header_version patch 00:07:55.866 09:54:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:55.866 09:54:54 -- app/version.sh@14 -- # tr -d '"' 00:07:55.866 09:54:54 -- app/version.sh@14 -- # cut -f2 00:07:55.866 09:54:54 -- app/version.sh@19 -- # patch=1 00:07:55.866 09:54:54 -- app/version.sh@20 -- # get_header_version suffix 00:07:56.125 09:54:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.125 09:54:54 -- app/version.sh@14 -- # tr -d '"' 00:07:56.125 09:54:54 -- app/version.sh@14 -- # cut -f2 00:07:56.125 09:54:54 -- app/version.sh@20 -- # suffix=-pre 00:07:56.125 09:54:54 -- app/version.sh@22 -- # version=24.1 00:07:56.125 09:54:54 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.125 09:54:54 -- app/version.sh@25 -- # version=24.1.1 00:07:56.125 09:54:54 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:56.125 09:54:54 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:56.125 09:54:54 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:56.125 09:54:54 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:56.125 09:54:54 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:56.125 00:07:56.125 real 0m0.235s 00:07:56.125 user 0m0.171s 00:07:56.125 sys 0m0.106s 00:07:56.125 09:54:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.125 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.125 ************************************ 00:07:56.125 END TEST version 00:07:56.125 ************************************ 00:07:56.125 09:54:54 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:56.125 
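The version.sh trace above assembles the release string from include/spdk/version.h and compares it against what the installed Python bindings report. A minimal sketch of that extraction, inferred from the grep/cut/tr calls in the trace (the rc0 mapping for -pre builds is an assumption, not the version.sh source):

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h |
            cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 1
    suffix=$(get_header_version SUFFIX)   # -pre
    version="${major}.${minor}"
    if (( patch != 0 )); then version="${version}.${patch}"; fi
    [[ $suffix == -pre ]] && version="${version}rc0"   # 24.1.1rc0

    # Cross-check against the Python package, as the test does:
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]]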
09:54:54 -- spdk/autotest.sh@191 -- # uname -s 00:07:56.125 09:54:54 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:56.125 09:54:54 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:56.125 09:54:54 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:56.125 09:54:54 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:56.125 09:54:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.125 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.125 09:54:54 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:56.125 09:54:54 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:56.125 09:54:54 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.125 09:54:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.125 09:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.125 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.125 ************************************ 00:07:56.125 START TEST nvmf_tcp 00:07:56.125 ************************************ 00:07:56.125 09:54:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.125 * Looking for test storage... 00:07:56.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:56.125 09:54:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.125 09:54:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.125 09:54:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.384 09:54:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.384 09:54:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.384 09:54:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.384 09:54:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.384 09:54:54 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.384 09:54:54 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.384 09:54:54 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.384 09:54:54 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.384 09:54:54 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.384 09:54:54 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.384 09:54:54 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.384 09:54:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.384 09:54:54 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.384 09:54:54 -- scripts/common.sh@344 -- # : 1 00:07:56.384 09:54:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.384 09:54:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.384 09:54:54 -- scripts/common.sh@364 -- # decimal 1 00:07:56.384 09:54:54 -- scripts/common.sh@352 -- # local d=1 00:07:56.384 09:54:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.384 09:54:54 -- scripts/common.sh@354 -- # echo 1 00:07:56.384 09:54:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.384 09:54:54 -- scripts/common.sh@365 -- # decimal 2 00:07:56.384 09:54:54 -- scripts/common.sh@352 -- # local d=2 00:07:56.384 09:54:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.384 09:54:54 -- scripts/common.sh@354 -- # echo 2 00:07:56.384 09:54:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.384 09:54:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.384 09:54:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.384 09:54:54 -- scripts/common.sh@367 -- # return 0 00:07:56.384 09:54:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.384 09:54:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.384 --rc genhtml_branch_coverage=1 00:07:56.384 --rc genhtml_function_coverage=1 00:07:56.384 --rc genhtml_legend=1 00:07:56.384 --rc geninfo_all_blocks=1 00:07:56.384 --rc geninfo_unexecuted_blocks=1 00:07:56.384 00:07:56.384 ' 00:07:56.384 09:54:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.384 --rc genhtml_branch_coverage=1 00:07:56.384 --rc genhtml_function_coverage=1 00:07:56.384 --rc genhtml_legend=1 00:07:56.384 --rc geninfo_all_blocks=1 00:07:56.384 --rc geninfo_unexecuted_blocks=1 00:07:56.384 00:07:56.384 ' 00:07:56.384 09:54:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.384 --rc genhtml_branch_coverage=1 00:07:56.384 --rc genhtml_function_coverage=1 00:07:56.384 --rc genhtml_legend=1 00:07:56.384 --rc geninfo_all_blocks=1 00:07:56.384 --rc geninfo_unexecuted_blocks=1 00:07:56.384 00:07:56.384 ' 00:07:56.384 09:54:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.384 --rc genhtml_branch_coverage=1 00:07:56.384 --rc genhtml_function_coverage=1 00:07:56.384 --rc genhtml_legend=1 00:07:56.384 --rc geninfo_all_blocks=1 00:07:56.384 --rc geninfo_unexecuted_blocks=1 00:07:56.384 00:07:56.384 ' 00:07:56.384 09:54:54 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:56.384 09:54:54 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:56.384 09:54:54 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.384 09:54:54 -- nvmf/common.sh@7 -- # uname -s 00:07:56.384 09:54:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.384 09:54:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.384 09:54:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.384 09:54:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.384 09:54:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.384 09:54:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.384 09:54:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.384 09:54:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.384 09:54:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.384 09:54:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.384 09:54:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:07:56.384 09:54:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:07:56.384 09:54:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.384 09:54:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.384 09:54:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.384 09:54:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.384 09:54:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.384 09:54:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.384 09:54:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.385 09:54:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.385 09:54:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.385 09:54:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.385 09:54:54 -- paths/export.sh@5 -- # export PATH 00:07:56.385 09:54:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.385 09:54:54 -- nvmf/common.sh@46 -- # : 0 00:07:56.385 09:54:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:56.385 09:54:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:56.385 09:54:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:56.385 09:54:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.385 09:54:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.385 09:54:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:56.385 09:54:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:56.385 09:54:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:56.385 09:54:54 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:56.385 09:54:54 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:56.385 09:54:54 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:56.385 09:54:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.385 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.385 09:54:54 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:56.385 09:54:54 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:56.385 09:54:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.385 09:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.385 09:54:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.385 ************************************ 00:07:56.385 START TEST nvmf_example 00:07:56.385 ************************************ 00:07:56.385 09:54:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:56.385 * Looking for test storage... 00:07:56.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.385 09:54:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.385 09:54:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.385 09:54:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.385 09:54:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.385 09:54:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.385 09:54:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.644 09:54:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.644 09:54:55 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.644 09:54:55 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.644 09:54:55 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.644 09:54:55 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.644 09:54:55 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.644 09:54:55 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.644 09:54:55 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.644 09:54:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.644 09:54:55 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.644 09:54:55 -- scripts/common.sh@344 -- # : 1 00:07:56.644 09:54:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.644 09:54:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.644 09:54:55 -- scripts/common.sh@364 -- # decimal 1 00:07:56.644 09:54:55 -- scripts/common.sh@352 -- # local d=1 00:07:56.644 09:54:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.644 09:54:55 -- scripts/common.sh@354 -- # echo 1 00:07:56.644 09:54:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.644 09:54:55 -- scripts/common.sh@365 -- # decimal 2 00:07:56.644 09:54:55 -- scripts/common.sh@352 -- # local d=2 00:07:56.644 09:54:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.644 09:54:55 -- scripts/common.sh@354 -- # echo 2 00:07:56.644 09:54:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.644 09:54:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.644 09:54:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.644 09:54:55 -- scripts/common.sh@367 -- # return 0 00:07:56.644 09:54:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.644 09:54:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.644 --rc genhtml_branch_coverage=1 00:07:56.644 --rc genhtml_function_coverage=1 00:07:56.644 --rc genhtml_legend=1 00:07:56.644 --rc geninfo_all_blocks=1 00:07:56.644 --rc geninfo_unexecuted_blocks=1 00:07:56.644 00:07:56.644 ' 00:07:56.644 09:54:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.644 --rc genhtml_branch_coverage=1 00:07:56.644 --rc genhtml_function_coverage=1 00:07:56.644 --rc genhtml_legend=1 00:07:56.644 --rc geninfo_all_blocks=1 00:07:56.644 --rc geninfo_unexecuted_blocks=1 00:07:56.644 00:07:56.644 ' 00:07:56.644 09:54:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.644 --rc genhtml_branch_coverage=1 00:07:56.644 --rc genhtml_function_coverage=1 00:07:56.644 --rc genhtml_legend=1 00:07:56.644 --rc geninfo_all_blocks=1 00:07:56.644 --rc geninfo_unexecuted_blocks=1 00:07:56.644 00:07:56.644 ' 00:07:56.644 09:54:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.644 --rc genhtml_branch_coverage=1 00:07:56.644 --rc genhtml_function_coverage=1 00:07:56.644 --rc genhtml_legend=1 00:07:56.644 --rc geninfo_all_blocks=1 00:07:56.644 --rc geninfo_unexecuted_blocks=1 00:07:56.644 00:07:56.644 ' 00:07:56.644 09:54:55 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.644 09:54:55 -- nvmf/common.sh@7 -- # uname -s 00:07:56.644 09:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.644 09:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.644 09:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.644 09:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.644 09:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.644 09:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.644 09:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.644 09:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.644 09:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.644 09:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
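In the nvmf/common.sh setup traced above, the initiator identity comes from nvme gen-hostnqn, with the bare UUID kept as the host ID. A small sketch of how those two values appear to be derived and later handed to nvme connect; the connect invocation itself is an assumption based on the NVME_HOST array, not a command executed at this point in the log:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:f345a6e6-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the UUID after the last ':'

    # Later connect-mode tests pass both values, roughly as follows (assumed usage):
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"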
00:07:56.644 09:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:07:56.644 09:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.644 09:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.644 09:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.644 09:54:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.644 09:54:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.644 09:54:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.644 09:54:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.644 09:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.644 09:54:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.644 09:54:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.644 09:54:55 -- paths/export.sh@5 -- # export PATH 00:07:56.644 09:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.644 09:54:55 -- nvmf/common.sh@46 -- # : 0 00:07:56.644 09:54:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:56.644 09:54:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:56.644 09:54:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:56.644 09:54:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.644 09:54:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.644 09:54:55 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:56.644 09:54:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:56.644 09:54:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:56.644 09:54:55 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:56.644 09:54:55 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:56.644 09:54:55 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:56.644 09:54:55 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:56.644 09:54:55 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:56.644 09:54:55 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:56.644 09:54:55 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:56.644 09:54:55 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:56.644 09:54:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.644 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.644 09:54:55 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:56.644 09:54:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:56.644 09:54:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.644 09:54:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:56.644 09:54:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:56.644 09:54:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:56.644 09:54:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.644 09:54:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.644 09:54:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.644 09:54:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:56.644 09:54:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:56.644 09:54:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.644 09:54:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.644 09:54:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:56.644 09:54:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:56.644 09:54:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.644 09:54:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.644 09:54:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.644 09:54:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.645 09:54:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.645 09:54:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.645 09:54:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.645 09:54:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.645 09:54:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:56.645 Cannot find device "nvmf_init_br" 00:07:56.645 09:54:55 -- nvmf/common.sh@153 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:56.645 Cannot find device "nvmf_tgt_br" 00:07:56.645 09:54:55 -- nvmf/common.sh@154 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.645 Cannot find device "nvmf_tgt_br2" 
00:07:56.645 09:54:55 -- nvmf/common.sh@155 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:56.645 Cannot find device "nvmf_init_br" 00:07:56.645 09:54:55 -- nvmf/common.sh@156 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:56.645 Cannot find device "nvmf_tgt_br" 00:07:56.645 09:54:55 -- nvmf/common.sh@157 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:56.645 Cannot find device "nvmf_tgt_br2" 00:07:56.645 09:54:55 -- nvmf/common.sh@158 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:56.645 Cannot find device "nvmf_br" 00:07:56.645 09:54:55 -- nvmf/common.sh@159 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:56.645 Cannot find device "nvmf_init_if" 00:07:56.645 09:54:55 -- nvmf/common.sh@160 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.645 09:54:55 -- nvmf/common.sh@161 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.645 09:54:55 -- nvmf/common.sh@162 -- # true 00:07:56.645 09:54:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.645 09:54:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.645 09:54:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.645 09:54:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.645 09:54:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:56.645 09:54:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.645 09:54:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.645 09:54:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:56.645 09:54:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:56.645 09:54:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:56.645 09:54:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:56.645 09:54:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:56.645 09:54:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:56.645 09:54:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.645 09:54:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.645 09:54:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.903 09:54:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:56.903 09:54:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:56.903 09:54:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.903 09:54:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.904 09:54:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.904 09:54:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.904 09:54:55 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.904 09:54:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:56.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:07:56.904 00:07:56.904 --- 10.0.0.2 ping statistics --- 00:07:56.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.904 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:07:56.904 09:54:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:56.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:56.904 00:07:56.904 --- 10.0.0.3 ping statistics --- 00:07:56.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.904 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:56.904 09:54:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:56.904 00:07:56.904 --- 10.0.0.1 ping statistics --- 00:07:56.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.904 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:56.904 09:54:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.904 09:54:55 -- nvmf/common.sh@421 -- # return 0 00:07:56.904 09:54:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:56.904 09:54:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.904 09:54:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:56.904 09:54:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:56.904 09:54:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.904 09:54:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:56.904 09:54:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:56.904 09:54:55 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:56.904 09:54:55 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:56.904 09:54:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.904 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:07:56.904 09:54:55 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:56.904 09:54:55 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:56.904 09:54:55 -- target/nvmf_example.sh@34 -- # nvmfpid=72105 00:07:56.904 09:54:55 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:56.904 09:54:55 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.904 09:54:55 -- target/nvmf_example.sh@36 -- # waitforlisten 72105 00:07:56.904 09:54:55 -- common/autotest_common.sh@829 -- # '[' -z 72105 ']' 00:07:56.904 09:54:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.904 09:54:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.904 09:54:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
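The nvmf_veth_init sequence traced above builds a small topology before the example target is launched: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs carry 10.0.0.0/24 traffic onto the nvmf_br bridge, and pings verify reachability in both directions. Condensed into a sketch, with interface names and addresses taken from the trace and the second target interface and error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host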
00:07:56.904 09:54:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.904 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.280 09:54:56 -- common/autotest_common.sh@862 -- # return 0 00:07:58.280 09:54:56 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:58.280 09:54:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.280 09:54:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.280 09:54:56 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:58.280 09:54:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.280 09:54:56 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:58.280 09:54:56 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.280 09:54:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.280 09:54:56 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:58.280 09:54:56 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:58.280 09:54:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.280 09:54:56 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.280 09:54:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.280 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.280 09:54:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.280 09:54:56 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:58.280 09:54:56 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:08.257 Initializing NVMe Controllers 00:08:08.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:08.257 Initialization complete. Launching workers. 
00:08:08.257 ======================================================== 00:08:08.257 Latency(us) 00:08:08.257 Device Information : IOPS MiB/s Average min max 00:08:08.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16659.67 65.08 3841.21 608.42 25204.00 00:08:08.257 ======================================================== 00:08:08.257 Total : 16659.67 65.08 3841.21 608.42 25204.00 00:08:08.257 00:08:08.257 09:55:06 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:08.257 09:55:06 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:08.257 09:55:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:08.257 09:55:06 -- nvmf/common.sh@116 -- # sync 00:08:08.516 09:55:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:08.516 09:55:06 -- nvmf/common.sh@119 -- # set +e 00:08:08.516 09:55:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:08.516 09:55:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:08.516 rmmod nvme_tcp 00:08:08.516 rmmod nvme_fabrics 00:08:08.516 rmmod nvme_keyring 00:08:08.516 09:55:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:08.516 09:55:06 -- nvmf/common.sh@123 -- # set -e 00:08:08.516 09:55:06 -- nvmf/common.sh@124 -- # return 0 00:08:08.516 09:55:06 -- nvmf/common.sh@477 -- # '[' -n 72105 ']' 00:08:08.516 09:55:06 -- nvmf/common.sh@478 -- # killprocess 72105 00:08:08.516 09:55:06 -- common/autotest_common.sh@936 -- # '[' -z 72105 ']' 00:08:08.516 09:55:06 -- common/autotest_common.sh@940 -- # kill -0 72105 00:08:08.516 09:55:06 -- common/autotest_common.sh@941 -- # uname 00:08:08.516 09:55:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:08.516 09:55:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72105 00:08:08.516 09:55:07 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:08.516 09:55:07 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:08.516 killing process with pid 72105 00:08:08.516 09:55:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72105' 00:08:08.516 09:55:07 -- common/autotest_common.sh@955 -- # kill 72105 00:08:08.516 09:55:07 -- common/autotest_common.sh@960 -- # wait 72105 00:08:08.774 nvmf threads initialize successfully 00:08:08.774 bdev subsystem init successfully 00:08:08.775 created a nvmf target service 00:08:08.775 create targets's poll groups done 00:08:08.775 all subsystems of target started 00:08:08.775 nvmf target is running 00:08:08.775 all subsystems of target stopped 00:08:08.775 destroy targets's poll groups done 00:08:08.775 destroyed the nvmf target service 00:08:08.775 bdev subsystem finish successfully 00:08:08.775 nvmf threads destroy successfully 00:08:08.775 09:55:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:08.775 09:55:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:08.775 09:55:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:08.775 09:55:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.775 09:55:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:08.775 09:55:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.775 09:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.775 09:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.775 09:55:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:08.775 09:55:07 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:08.775 09:55:07 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:08.775 09:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.775 00:08:08.775 real 0m12.543s 00:08:08.775 user 0m44.874s 00:08:08.775 sys 0m2.154s 00:08:08.775 09:55:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.775 09:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:08.775 ************************************ 00:08:08.775 END TEST nvmf_example 00:08:08.775 ************************************ 00:08:09.034 09:55:07 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:09.034 09:55:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.034 09:55:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.034 09:55:07 -- common/autotest_common.sh@10 -- # set +x 00:08:09.034 ************************************ 00:08:09.034 START TEST nvmf_filesystem 00:08:09.034 ************************************ 00:08:09.034 09:55:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:09.034 * Looking for test storage... 00:08:09.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.034 09:55:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:09.034 09:55:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:09.034 09:55:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:09.034 09:55:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:09.034 09:55:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:09.034 09:55:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:09.034 09:55:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:09.034 09:55:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:09.034 09:55:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:09.034 09:55:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.034 09:55:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:09.034 09:55:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:09.034 09:55:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:09.034 09:55:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:09.034 09:55:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:09.034 09:55:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:09.034 09:55:07 -- scripts/common.sh@344 -- # : 1 00:08:09.034 09:55:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:09.034 09:55:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.034 09:55:07 -- scripts/common.sh@364 -- # decimal 1 00:08:09.034 09:55:07 -- scripts/common.sh@352 -- # local d=1 00:08:09.034 09:55:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.034 09:55:07 -- scripts/common.sh@354 -- # echo 1 00:08:09.034 09:55:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:09.034 09:55:07 -- scripts/common.sh@365 -- # decimal 2 00:08:09.034 09:55:07 -- scripts/common.sh@352 -- # local d=2 00:08:09.034 09:55:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.034 09:55:07 -- scripts/common.sh@354 -- # echo 2 00:08:09.034 09:55:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:09.034 09:55:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:09.034 09:55:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:09.034 09:55:07 -- scripts/common.sh@367 -- # return 0 00:08:09.034 09:55:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.035 09:55:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.035 --rc genhtml_branch_coverage=1 00:08:09.035 --rc genhtml_function_coverage=1 00:08:09.035 --rc genhtml_legend=1 00:08:09.035 --rc geninfo_all_blocks=1 00:08:09.035 --rc geninfo_unexecuted_blocks=1 00:08:09.035 00:08:09.035 ' 00:08:09.035 09:55:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.035 --rc genhtml_branch_coverage=1 00:08:09.035 --rc genhtml_function_coverage=1 00:08:09.035 --rc genhtml_legend=1 00:08:09.035 --rc geninfo_all_blocks=1 00:08:09.035 --rc geninfo_unexecuted_blocks=1 00:08:09.035 00:08:09.035 ' 00:08:09.035 09:55:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.035 --rc genhtml_branch_coverage=1 00:08:09.035 --rc genhtml_function_coverage=1 00:08:09.035 --rc genhtml_legend=1 00:08:09.035 --rc geninfo_all_blocks=1 00:08:09.035 --rc geninfo_unexecuted_blocks=1 00:08:09.035 00:08:09.035 ' 00:08:09.035 09:55:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:09.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.035 --rc genhtml_branch_coverage=1 00:08:09.035 --rc genhtml_function_coverage=1 00:08:09.035 --rc genhtml_legend=1 00:08:09.035 --rc geninfo_all_blocks=1 00:08:09.035 --rc geninfo_unexecuted_blocks=1 00:08:09.035 00:08:09.035 ' 00:08:09.035 09:55:07 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:09.035 09:55:07 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:09.035 09:55:07 -- common/autotest_common.sh@34 -- # set -e 00:08:09.035 09:55:07 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:09.035 09:55:07 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:09.035 09:55:07 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:09.035 09:55:07 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:09.035 09:55:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:09.035 09:55:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:09.035 09:55:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:09.035 09:55:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:09.035 09:55:07 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:09.035 09:55:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:09.035 09:55:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:09.035 09:55:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:09.035 09:55:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:09.035 09:55:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:09.035 09:55:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:09.035 09:55:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:09.035 09:55:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:09.035 09:55:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:09.035 09:55:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:09.035 09:55:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:09.035 09:55:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:09.035 09:55:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:09.035 09:55:07 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:09.035 09:55:07 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:09.035 09:55:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:09.035 09:55:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:09.035 09:55:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:09.035 09:55:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:09.035 09:55:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:09.035 09:55:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:09.035 09:55:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:09.035 09:55:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:09.035 09:55:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:09.035 09:55:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:09.035 09:55:07 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:09.035 09:55:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:09.035 09:55:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:09.035 09:55:07 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:09.035 09:55:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:09.035 09:55:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:09.035 09:55:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:09.035 09:55:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:09.035 09:55:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:09.035 09:55:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:09.035 09:55:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:09.035 09:55:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:09.035 09:55:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:09.035 09:55:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:09.035 09:55:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:09.035 09:55:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:09.035 09:55:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:09.035 09:55:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:09.035 09:55:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:09.035 09:55:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:09.035 09:55:07 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:09.035 09:55:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:09.035 09:55:07 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:09.035 09:55:07 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:09.035 09:55:07 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:09.035 09:55:07 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:09.035 09:55:07 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:09.035 09:55:07 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:09.035 09:55:07 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:09.035 09:55:07 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:09.035 09:55:07 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:09.035 09:55:07 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:09.035 09:55:07 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:09.035 09:55:07 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:09.035 09:55:07 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:09.035 09:55:07 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:09.035 09:55:07 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:09.035 09:55:07 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:09.035 09:55:07 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:09.035 09:55:07 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:09.035 09:55:07 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:09.035 09:55:07 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:09.035 09:55:07 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:09.035 09:55:07 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:09.035 09:55:07 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:09.035 09:55:07 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:09.035 09:55:07 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:09.035 09:55:07 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:09.035 09:55:07 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:09.035 09:55:07 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:09.035 09:55:07 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:09.035 09:55:07 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:09.035 09:55:07 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:09.035 09:55:07 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:09.035 09:55:07 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:09.035 09:55:07 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:09.035 09:55:07 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:09.035 09:55:07 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:09.035 09:55:07 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:09.035 09:55:07 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:09.035 09:55:07 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:09.035 09:55:07 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:09.035 09:55:07 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:09.035 09:55:07 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:09.035 09:55:07 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:09.035 #define SPDK_CONFIG_H 00:08:09.035 #define SPDK_CONFIG_APPS 1 00:08:09.035 #define SPDK_CONFIG_ARCH native 00:08:09.035 #undef SPDK_CONFIG_ASAN 00:08:09.035 #define SPDK_CONFIG_AVAHI 1 00:08:09.035 #undef SPDK_CONFIG_CET 00:08:09.035 #define SPDK_CONFIG_COVERAGE 1 00:08:09.035 #define SPDK_CONFIG_CROSS_PREFIX 00:08:09.035 #undef SPDK_CONFIG_CRYPTO 00:08:09.035 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:09.035 #undef SPDK_CONFIG_CUSTOMOCF 00:08:09.035 #undef SPDK_CONFIG_DAOS 00:08:09.035 #define SPDK_CONFIG_DAOS_DIR 00:08:09.035 #define SPDK_CONFIG_DEBUG 1 00:08:09.035 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:09.035 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:09.036 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:09.036 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:09.036 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:09.036 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:09.036 #define SPDK_CONFIG_EXAMPLES 1 00:08:09.036 #undef SPDK_CONFIG_FC 00:08:09.036 #define SPDK_CONFIG_FC_PATH 00:08:09.036 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:09.036 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:09.036 #undef SPDK_CONFIG_FUSE 00:08:09.036 #undef SPDK_CONFIG_FUZZER 00:08:09.036 #define SPDK_CONFIG_FUZZER_LIB 00:08:09.036 #define SPDK_CONFIG_GOLANG 1 00:08:09.036 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:09.036 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:09.036 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:09.036 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:09.036 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:09.036 #define SPDK_CONFIG_IDXD 1 00:08:09.036 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:09.036 #undef SPDK_CONFIG_IPSEC_MB 00:08:09.036 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:09.036 #define SPDK_CONFIG_ISAL 1 00:08:09.036 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:09.036 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:09.036 #define SPDK_CONFIG_LIBDIR 00:08:09.036 #undef SPDK_CONFIG_LTO 00:08:09.036 #define SPDK_CONFIG_MAX_LCORES 00:08:09.036 #define SPDK_CONFIG_NVME_CUSE 1 00:08:09.036 #undef SPDK_CONFIG_OCF 00:08:09.036 #define SPDK_CONFIG_OCF_PATH 00:08:09.036 #define SPDK_CONFIG_OPENSSL_PATH 00:08:09.036 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:09.036 #undef SPDK_CONFIG_PGO_USE 00:08:09.036 #define SPDK_CONFIG_PREFIX /usr/local 00:08:09.036 #undef SPDK_CONFIG_RAID5F 00:08:09.036 #undef SPDK_CONFIG_RBD 00:08:09.036 #define SPDK_CONFIG_RDMA 1 00:08:09.036 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:09.036 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:09.036 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:09.036 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:09.036 #define SPDK_CONFIG_SHARED 1 00:08:09.036 #undef SPDK_CONFIG_SMA 00:08:09.036 #define SPDK_CONFIG_TESTS 1 00:08:09.036 #undef SPDK_CONFIG_TSAN 00:08:09.036 #define SPDK_CONFIG_UBLK 1 00:08:09.036 #define SPDK_CONFIG_UBSAN 1 00:08:09.036 #undef SPDK_CONFIG_UNIT_TESTS 00:08:09.036 #undef SPDK_CONFIG_URING 00:08:09.036 #define SPDK_CONFIG_URING_PATH 00:08:09.036 #undef SPDK_CONFIG_URING_ZNS 00:08:09.036 #define SPDK_CONFIG_USDT 1 00:08:09.036 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:09.036 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:09.036 #undef SPDK_CONFIG_VFIO_USER 00:08:09.036 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:09.036 #define SPDK_CONFIG_VHOST 1 00:08:09.036 #define SPDK_CONFIG_VIRTIO 1 00:08:09.036 #undef SPDK_CONFIG_VTUNE 00:08:09.036 #define SPDK_CONFIG_VTUNE_DIR 00:08:09.036 #define SPDK_CONFIG_WERROR 1 00:08:09.036 #define SPDK_CONFIG_WPDK_DIR 00:08:09.036 #undef SPDK_CONFIG_XNVME 00:08:09.036 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:09.036 09:55:07 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:09.036 09:55:07 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.036 09:55:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.036 09:55:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.036 09:55:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.036 09:55:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.036 09:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.036 09:55:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.036 09:55:07 -- paths/export.sh@5 -- # export PATH 00:08:09.036 09:55:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.036 09:55:07 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:09.036 09:55:07 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:09.036 09:55:07 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:09.297 09:55:07 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:09.297 09:55:07 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:09.297 09:55:07 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:09.297 09:55:07 -- pm/common@16 -- # TEST_TAG=N/A 00:08:09.297 09:55:07 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:09.297 09:55:07 -- common/autotest_common.sh@52 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:09.297 09:55:07 -- common/autotest_common.sh@56 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:09.297 09:55:07 -- common/autotest_common.sh@58 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:09.297 09:55:07 -- common/autotest_common.sh@60 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:09.297 09:55:07 -- common/autotest_common.sh@62 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:09.297 09:55:07 -- common/autotest_common.sh@64 -- # : 00:08:09.297 09:55:07 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:09.297 09:55:07 -- common/autotest_common.sh@66 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:09.297 09:55:07 -- common/autotest_common.sh@68 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:09.297 09:55:07 -- common/autotest_common.sh@70 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:09.297 09:55:07 -- common/autotest_common.sh@72 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:09.297 09:55:07 -- common/autotest_common.sh@74 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:09.297 09:55:07 -- common/autotest_common.sh@76 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:09.297 09:55:07 -- common/autotest_common.sh@78 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:09.297 09:55:07 -- common/autotest_common.sh@80 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:09.297 09:55:07 -- common/autotest_common.sh@82 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:09.297 09:55:07 -- common/autotest_common.sh@84 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:09.297 09:55:07 -- common/autotest_common.sh@86 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:09.297 09:55:07 -- common/autotest_common.sh@88 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:09.297 09:55:07 -- common/autotest_common.sh@90 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:09.297 09:55:07 -- common/autotest_common.sh@92 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:09.297 09:55:07 -- common/autotest_common.sh@94 -- # : 0 00:08:09.297 09:55:07 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:09.297 09:55:07 -- common/autotest_common.sh@96 -- # : tcp 00:08:09.297 09:55:07 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:09.297 09:55:07 -- common/autotest_common.sh@98 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:09.297 09:55:07 -- common/autotest_common.sh@100 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:09.297 09:55:07 -- common/autotest_common.sh@102 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:09.297 09:55:07 -- common/autotest_common.sh@104 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:09.297 09:55:07 -- common/autotest_common.sh@106 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:09.297 09:55:07 -- common/autotest_common.sh@108 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:09.297 09:55:07 -- common/autotest_common.sh@110 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:09.297 09:55:07 -- common/autotest_common.sh@112 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:09.297 09:55:07 -- common/autotest_common.sh@114 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:09.297 09:55:07 -- common/autotest_common.sh@116 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:09.297 09:55:07 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:09.297 09:55:07 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:09.297 09:55:07 -- common/autotest_common.sh@120 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:09.297 09:55:07 -- common/autotest_common.sh@122 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:09.297 09:55:07 -- common/autotest_common.sh@124 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:09.297 09:55:07 -- common/autotest_common.sh@126 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:09.297 09:55:07 -- common/autotest_common.sh@128 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:09.297 09:55:07 -- common/autotest_common.sh@130 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:09.297 09:55:07 -- common/autotest_common.sh@132 -- # : v23.11 00:08:09.297 09:55:07 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:09.297 09:55:07 -- common/autotest_common.sh@134 -- # : true 00:08:09.297 09:55:07 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:09.297 09:55:07 -- common/autotest_common.sh@136 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:09.297 09:55:07 -- common/autotest_common.sh@138 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:09.297 09:55:07 -- common/autotest_common.sh@140 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:09.297 09:55:07 -- 
common/autotest_common.sh@142 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:09.297 09:55:07 -- common/autotest_common.sh@144 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:09.297 09:55:07 -- common/autotest_common.sh@146 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:09.297 09:55:07 -- common/autotest_common.sh@148 -- # : 00:08:09.297 09:55:07 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:09.297 09:55:07 -- common/autotest_common.sh@150 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:09.297 09:55:07 -- common/autotest_common.sh@152 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:09.297 09:55:07 -- common/autotest_common.sh@154 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:09.297 09:55:07 -- common/autotest_common.sh@156 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:09.297 09:55:07 -- common/autotest_common.sh@158 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:09.297 09:55:07 -- common/autotest_common.sh@160 -- # : 0 00:08:09.297 09:55:07 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:09.297 09:55:07 -- common/autotest_common.sh@163 -- # : 00:08:09.297 09:55:07 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:09.297 09:55:07 -- common/autotest_common.sh@165 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:09.297 09:55:07 -- common/autotest_common.sh@167 -- # : 1 00:08:09.297 09:55:07 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:09.297 09:55:07 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:09.297 09:55:07 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.297 09:55:07 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:09.297 09:55:07 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:09.297 09:55:07 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:09.298 09:55:07 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:09.298 09:55:07 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:09.298 09:55:07 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.298 09:55:07 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:09.298 09:55:07 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.298 09:55:07 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:09.298 09:55:07 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:09.298 09:55:07 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:09.298 09:55:07 -- common/autotest_common.sh@196 -- # cat 00:08:09.298 09:55:07 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:09.298 09:55:07 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.298 09:55:07 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:09.298 09:55:07 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.298 09:55:07 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:09.298 09:55:07 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:09.298 09:55:07 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:09.298 09:55:07 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:09.298 09:55:07 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:09.298 09:55:07 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:09.298 09:55:07 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:09.298 09:55:07 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.298 09:55:07 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:09.298 09:55:07 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.298 09:55:07 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:09.298 09:55:07 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:09.298 09:55:07 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:09.298 09:55:07 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:09.298 09:55:07 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:09.298 09:55:07 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:09.298 09:55:07 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:09.298 09:55:07 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:09.298 09:55:07 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:09.298 09:55:07 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:09.298 09:55:07 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:09.298 09:55:07 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:09.298 09:55:07 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:09.298 09:55:07 -- common/autotest_common.sh@259 -- # valgrind= 00:08:09.298 09:55:07 -- common/autotest_common.sh@265 -- # uname -s 00:08:09.298 09:55:07 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:09.298 09:55:07 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:09.298 09:55:07 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:09.298 09:55:07 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:09.298 09:55:07 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:09.298 09:55:07 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:09.298 09:55:07 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:09.298 09:55:07 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:09.298 09:55:07 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:09.298 09:55:07 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:09.298 09:55:07 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:09.298 09:55:07 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:09.298 09:55:07 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:09.298 09:55:07 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:09.298 09:55:07 -- common/autotest_common.sh@319 -- # [[ 
-z 72359 ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@319 -- # kill -0 72359 00:08:09.298 09:55:07 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:09.298 09:55:07 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:09.298 09:55:07 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:09.298 09:55:07 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:09.298 09:55:07 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:09.298 09:55:07 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:09.298 09:55:07 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:09.298 09:55:07 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.3HMxlz 00:08:09.298 09:55:07 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:09.298 09:55:07 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:09.298 09:55:07 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.3HMxlz/tests/target /tmp/spdk.3HMxlz 00:08:09.298 09:55:07 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:09.298 09:55:07 -- common/autotest_common.sh@328 -- # df -T 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293764608 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289588224 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:09.298 09:55:07 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293764608 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289588224 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:09.298 09:55:07 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # avails["$mount"]=97216176128 00:08:09.298 09:55:07 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:09.298 09:55:07 -- common/autotest_common.sh@364 -- # uses["$mount"]=2486603776 00:08:09.298 09:55:07 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:09.299 09:55:07 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:09.299 * Looking for test storage... 00:08:09.299 09:55:07 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:09.299 09:55:07 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:09.299 09:55:07 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.299 09:55:07 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:09.299 09:55:07 -- common/autotest_common.sh@373 -- # mount=/home 00:08:09.299 09:55:07 -- common/autotest_common.sh@375 -- # target_space=13293764608 00:08:09.299 09:55:07 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:09.299 09:55:07 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:09.299 09:55:07 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.299 09:55:07 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.299 09:55:07 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.299 09:55:07 -- common/autotest_common.sh@390 -- # return 0 00:08:09.299 09:55:07 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:09.299 09:55:07 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:09.299 09:55:07 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:09.299 09:55:07 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1682 -- # true 00:08:09.299 09:55:07 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:09.299 09:55:07 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@27 -- # exec 00:08:09.299 09:55:07 -- common/autotest_common.sh@29 -- # exec 00:08:09.299 09:55:07 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:09.299 09:55:07 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:09.299 09:55:07 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:09.299 09:55:07 -- common/autotest_common.sh@18 -- # set -x 00:08:09.299 09:55:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:09.299 09:55:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:09.299 09:55:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:09.299 09:55:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:09.299 09:55:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:09.299 09:55:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:09.299 09:55:07 -- scripts/common.sh@335 -- # IFS=.-: 00:08:09.299 09:55:07 -- scripts/common.sh@335 -- # read -ra ver1 00:08:09.299 09:55:07 -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.299 09:55:07 -- scripts/common.sh@336 -- # read -ra ver2 00:08:09.299 09:55:07 -- scripts/common.sh@337 -- # local 'op=<' 00:08:09.299 09:55:07 -- scripts/common.sh@339 -- # ver1_l=2 00:08:09.299 09:55:07 -- scripts/common.sh@340 -- # ver2_l=1 00:08:09.299 09:55:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:09.299 09:55:07 -- scripts/common.sh@343 -- # case "$op" in 00:08:09.299 09:55:07 -- scripts/common.sh@344 -- # : 1 00:08:09.299 09:55:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:09.299 09:55:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.299 09:55:07 -- scripts/common.sh@364 -- # decimal 1 00:08:09.299 09:55:07 -- scripts/common.sh@352 -- # local d=1 00:08:09.299 09:55:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.299 09:55:07 -- scripts/common.sh@354 -- # echo 1 00:08:09.299 09:55:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:09.299 09:55:07 -- scripts/common.sh@365 -- # decimal 2 00:08:09.299 09:55:07 -- scripts/common.sh@352 -- # local d=2 00:08:09.299 09:55:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.299 09:55:07 -- scripts/common.sh@354 -- # echo 2 00:08:09.299 09:55:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:09.299 09:55:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:09.299 09:55:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:09.299 09:55:07 -- scripts/common.sh@367 -- # return 0 00:08:09.299 09:55:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:09.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.299 --rc genhtml_branch_coverage=1 00:08:09.299 --rc genhtml_function_coverage=1 00:08:09.299 --rc genhtml_legend=1 00:08:09.299 --rc geninfo_all_blocks=1 00:08:09.299 --rc geninfo_unexecuted_blocks=1 00:08:09.299 00:08:09.299 ' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:09.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.299 --rc genhtml_branch_coverage=1 00:08:09.299 --rc genhtml_function_coverage=1 00:08:09.299 --rc genhtml_legend=1 00:08:09.299 --rc geninfo_all_blocks=1 00:08:09.299 --rc geninfo_unexecuted_blocks=1 00:08:09.299 00:08:09.299 ' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:09.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.299 --rc genhtml_branch_coverage=1 00:08:09.299 --rc genhtml_function_coverage=1 00:08:09.299 --rc genhtml_legend=1 00:08:09.299 --rc geninfo_all_blocks=1 00:08:09.299 --rc 
geninfo_unexecuted_blocks=1 00:08:09.299 00:08:09.299 ' 00:08:09.299 09:55:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:09.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.299 --rc genhtml_branch_coverage=1 00:08:09.299 --rc genhtml_function_coverage=1 00:08:09.299 --rc genhtml_legend=1 00:08:09.299 --rc geninfo_all_blocks=1 00:08:09.299 --rc geninfo_unexecuted_blocks=1 00:08:09.299 00:08:09.299 ' 00:08:09.299 09:55:07 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.299 09:55:07 -- nvmf/common.sh@7 -- # uname -s 00:08:09.299 09:55:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.299 09:55:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.299 09:55:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.299 09:55:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.299 09:55:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.299 09:55:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.299 09:55:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.299 09:55:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.299 09:55:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.299 09:55:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.299 09:55:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:09.299 09:55:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:09.299 09:55:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.299 09:55:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.299 09:55:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.299 09:55:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.299 09:55:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.299 09:55:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.299 09:55:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.299 09:55:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.299 09:55:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.299 09:55:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.299 09:55:07 -- paths/export.sh@5 -- # export PATH 00:08:09.299 09:55:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.299 09:55:07 -- nvmf/common.sh@46 -- # : 0 00:08:09.299 09:55:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.299 09:55:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.299 09:55:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.299 09:55:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.299 09:55:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.299 09:55:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.299 09:55:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.299 09:55:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.299 09:55:07 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:09.299 09:55:07 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:09.299 09:55:07 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:09.299 09:55:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.299 09:55:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.299 09:55:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.299 09:55:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.299 09:55:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.299 09:55:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.300 09:55:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.300 09:55:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.300 09:55:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:09.300 09:55:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:09.300 09:55:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:09.300 09:55:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:09.300 09:55:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:09.300 09:55:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:09.300 09:55:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.300 09:55:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.300 09:55:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.300 09:55:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:09.300 09:55:07 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.300 09:55:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.300 09:55:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.300 09:55:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.300 09:55:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.300 09:55:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.300 09:55:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.300 09:55:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.300 09:55:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:09.300 09:55:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:09.300 Cannot find device "nvmf_tgt_br" 00:08:09.300 09:55:07 -- nvmf/common.sh@154 -- # true 00:08:09.300 09:55:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.300 Cannot find device "nvmf_tgt_br2" 00:08:09.300 09:55:07 -- nvmf/common.sh@155 -- # true 00:08:09.300 09:55:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:09.300 09:55:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:09.300 Cannot find device "nvmf_tgt_br" 00:08:09.300 09:55:07 -- nvmf/common.sh@157 -- # true 00:08:09.300 09:55:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:09.300 Cannot find device "nvmf_tgt_br2" 00:08:09.300 09:55:07 -- nvmf/common.sh@158 -- # true 00:08:09.300 09:55:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:09.559 09:55:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:09.559 09:55:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.559 09:55:07 -- nvmf/common.sh@161 -- # true 00:08:09.559 09:55:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.559 09:55:07 -- nvmf/common.sh@162 -- # true 00:08:09.559 09:55:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.559 09:55:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.559 09:55:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.559 09:55:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.559 09:55:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.559 09:55:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.559 09:55:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.559 09:55:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.559 09:55:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.559 09:55:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:09.559 09:55:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:09.559 09:55:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:09.559 09:55:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:09.559 09:55:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.559 09:55:08 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.559 09:55:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.559 09:55:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:09.559 09:55:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:09.559 09:55:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.559 09:55:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.559 09:55:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.559 09:55:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.559 09:55:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.559 09:55:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:09.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:09.559 00:08:09.559 --- 10.0.0.2 ping statistics --- 00:08:09.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.559 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:09.559 09:55:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:09.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:09.559 00:08:09.559 --- 10.0.0.3 ping statistics --- 00:08:09.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.559 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:09.559 09:55:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:09.559 00:08:09.559 --- 10.0.0.1 ping statistics --- 00:08:09.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.559 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:09.559 09:55:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.559 09:55:08 -- nvmf/common.sh@421 -- # return 0 00:08:09.559 09:55:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:09.559 09:55:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.559 09:55:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:09.559 09:55:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:09.559 09:55:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.559 09:55:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:09.559 09:55:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:09.559 09:55:08 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:09.559 09:55:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:09.559 09:55:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.559 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 ************************************ 00:08:09.559 START TEST nvmf_filesystem_no_in_capsule 00:08:09.559 ************************************ 00:08:09.559 09:55:08 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:09.559 09:55:08 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:09.559 09:55:08 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:09.559 09:55:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:09.559 09:55:08 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:09.559 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.818 09:55:08 -- nvmf/common.sh@469 -- # nvmfpid=72526 00:08:09.818 09:55:08 -- nvmf/common.sh@470 -- # waitforlisten 72526 00:08:09.818 09:55:08 -- common/autotest_common.sh@829 -- # '[' -z 72526 ']' 00:08:09.818 09:55:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.818 09:55:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.818 09:55:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.818 09:55:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.818 09:55:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.818 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.818 [2024-12-16 09:55:08.228471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:09.818 [2024-12-16 09:55:08.228534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.818 [2024-12-16 09:55:08.369312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.818 [2024-12-16 09:55:08.439401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.818 [2024-12-16 09:55:08.439581] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.818 [2024-12-16 09:55:08.439599] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.818 [2024-12-16 09:55:08.439612] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
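For reference, the nvmf_veth_init steps traced above build an isolated test network and then launch the target inside it. A minimal hand-run sketch of the same topology follows; the interface, namespace, and address names are taken from the log, while the retry and cleanup logic in nvmf/common.sh is omitted.

# Sketch of the veth/bridge topology that nvmf_veth_init sets up above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                      # let the bridge forward
ping -c 1 10.0.0.2                                                        # sanity-check reachability
# The target is then started inside the namespace, as in the log:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &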
00:08:09.818 [2024-12-16 09:55:08.439760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.818 [2024-12-16 09:55:08.440319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.818 [2024-12-16 09:55:08.440398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.818 [2024-12-16 09:55:08.440408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.754 09:55:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.754 09:55:09 -- common/autotest_common.sh@862 -- # return 0 00:08:10.754 09:55:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:10.755 09:55:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.755 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:10.755 09:55:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.755 09:55:09 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:10.755 09:55:09 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:10.755 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.755 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:10.755 [2024-12-16 09:55:09.298815] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.755 09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.755 09:55:09 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:10.755 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.755 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.013 Malloc1 00:08:11.013 09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.013 09:55:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:11.013 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.013 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.013 09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.014 09:55:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:11.014 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.014 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.014 09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.014 09:55:09 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.014 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.014 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.014 [2024-12-16 09:55:09.496834] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.014 09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.014 09:55:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:11.014 09:55:09 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:11.014 09:55:09 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:11.014 09:55:09 -- common/autotest_common.sh@1369 -- # local bs 00:08:11.014 09:55:09 -- common/autotest_common.sh@1370 -- # local nb 00:08:11.014 09:55:09 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:11.014 09:55:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.014 09:55:09 -- common/autotest_common.sh@10 -- # set +x 00:08:11.014 
09:55:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.014 09:55:09 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:11.014 { 00:08:11.014 "aliases": [ 00:08:11.014 "6697077d-fa6c-4b91-9c7d-6e027322c71a" 00:08:11.014 ], 00:08:11.014 "assigned_rate_limits": { 00:08:11.014 "r_mbytes_per_sec": 0, 00:08:11.014 "rw_ios_per_sec": 0, 00:08:11.014 "rw_mbytes_per_sec": 0, 00:08:11.014 "w_mbytes_per_sec": 0 00:08:11.014 }, 00:08:11.014 "block_size": 512, 00:08:11.014 "claim_type": "exclusive_write", 00:08:11.014 "claimed": true, 00:08:11.014 "driver_specific": {}, 00:08:11.014 "memory_domains": [ 00:08:11.014 { 00:08:11.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.014 "dma_device_type": 2 00:08:11.014 } 00:08:11.014 ], 00:08:11.014 "name": "Malloc1", 00:08:11.014 "num_blocks": 1048576, 00:08:11.014 "product_name": "Malloc disk", 00:08:11.014 "supported_io_types": { 00:08:11.014 "abort": true, 00:08:11.014 "compare": false, 00:08:11.014 "compare_and_write": false, 00:08:11.014 "flush": true, 00:08:11.014 "nvme_admin": false, 00:08:11.014 "nvme_io": false, 00:08:11.014 "read": true, 00:08:11.014 "reset": true, 00:08:11.014 "unmap": true, 00:08:11.014 "write": true, 00:08:11.014 "write_zeroes": true 00:08:11.014 }, 00:08:11.014 "uuid": "6697077d-fa6c-4b91-9c7d-6e027322c71a", 00:08:11.014 "zoned": false 00:08:11.014 } 00:08:11.014 ]' 00:08:11.014 09:55:09 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:11.014 09:55:09 -- common/autotest_common.sh@1372 -- # bs=512 00:08:11.014 09:55:09 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:11.014 09:55:09 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:11.014 09:55:09 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:11.014 09:55:09 -- common/autotest_common.sh@1377 -- # echo 512 00:08:11.014 09:55:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:11.014 09:55:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.272 09:55:09 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.272 09:55:09 -- common/autotest_common.sh@1187 -- # local i=0 00:08:11.272 09:55:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.272 09:55:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:11.272 09:55:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:13.181 09:55:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:13.439 09:55:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:13.439 09:55:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.439 09:55:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:13.439 09:55:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.439 09:55:11 -- common/autotest_common.sh@1197 -- # return 0 00:08:13.439 09:55:11 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.439 09:55:11 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:13.439 09:55:11 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.439 09:55:11 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.439 09:55:11 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.439 09:55:11 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.439 09:55:11 -- 
setup/common.sh@80 -- # echo 536870912 00:08:13.439 09:55:11 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.439 09:55:11 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.439 09:55:11 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.439 09:55:11 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.439 09:55:11 -- target/filesystem.sh@69 -- # partprobe 00:08:13.439 09:55:11 -- target/filesystem.sh@70 -- # sleep 1 00:08:14.375 09:55:12 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:14.375 09:55:12 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:14.375 09:55:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:14.375 09:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.375 09:55:12 -- common/autotest_common.sh@10 -- # set +x 00:08:14.375 ************************************ 00:08:14.375 START TEST filesystem_ext4 00:08:14.375 ************************************ 00:08:14.375 09:55:12 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:14.375 09:55:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:14.375 09:55:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.375 09:55:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:14.375 09:55:12 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:14.375 09:55:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:14.375 09:55:12 -- common/autotest_common.sh@914 -- # local i=0 00:08:14.375 09:55:12 -- common/autotest_common.sh@915 -- # local force 00:08:14.375 09:55:12 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:14.375 09:55:12 -- common/autotest_common.sh@918 -- # force=-F 00:08:14.375 09:55:12 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:14.375 mke2fs 1.47.0 (5-Feb-2023) 00:08:14.634 Discarding device blocks: 0/522240 done 00:08:14.634 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:14.634 Filesystem UUID: 1c2a92f6-e694-4cab-af1c-4b9befcafb7e 00:08:14.634 Superblock backups stored on blocks: 00:08:14.634 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:14.634 00:08:14.634 Allocating group tables: 0/64 done 00:08:14.634 Writing inode tables: 0/64 done 00:08:14.634 Creating journal (8192 blocks): done 00:08:14.634 Writing superblocks and filesystem accounting information: 0/64 done 00:08:14.634 00:08:14.634 09:55:13 -- common/autotest_common.sh@931 -- # return 0 00:08:14.634 09:55:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.903 09:55:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.903 09:55:18 -- target/filesystem.sh@25 -- # sync 00:08:20.171 09:55:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.171 09:55:18 -- target/filesystem.sh@27 -- # sync 00:08:20.171 09:55:18 -- target/filesystem.sh@29 -- # i=0 00:08:20.171 09:55:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.171 09:55:18 -- target/filesystem.sh@37 -- # kill -0 72526 00:08:20.171 09:55:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.171 09:55:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.171 09:55:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.171 09:55:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.172 00:08:20.172 real 0m5.631s 00:08:20.172 user 0m0.027s 00:08:20.172 sys 0m0.060s 00:08:20.172 
09:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.172 09:55:18 -- common/autotest_common.sh@10 -- # set +x 00:08:20.172 ************************************ 00:08:20.172 END TEST filesystem_ext4 00:08:20.172 ************************************ 00:08:20.172 09:55:18 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.172 09:55:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.172 09:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.172 09:55:18 -- common/autotest_common.sh@10 -- # set +x 00:08:20.172 ************************************ 00:08:20.172 START TEST filesystem_btrfs 00:08:20.172 ************************************ 00:08:20.172 09:55:18 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.172 09:55:18 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.172 09:55:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.172 09:55:18 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.172 09:55:18 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:20.172 09:55:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:20.172 09:55:18 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.172 09:55:18 -- common/autotest_common.sh@915 -- # local force 00:08:20.172 09:55:18 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:20.172 09:55:18 -- common/autotest_common.sh@920 -- # force=-f 00:08:20.172 09:55:18 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.447 btrfs-progs v6.8.1 00:08:20.447 See https://btrfs.readthedocs.io for more information. 00:08:20.447 00:08:20.447 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:20.447 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.447 this does not affect your deployments: 00:08:20.447 - DUP for metadata (-m dup) 00:08:20.447 - enabled no-holes (-O no-holes) 00:08:20.447 - enabled free-space-tree (-R free-space-tree) 00:08:20.447 00:08:20.447 Label: (null) 00:08:20.447 UUID: cf122048-982c-4f2e-8a08-d322302bed55 00:08:20.447 Node size: 16384 00:08:20.447 Sector size: 4096 (CPU page size: 4096) 00:08:20.447 Filesystem size: 510.00MiB 00:08:20.447 Block group profiles: 00:08:20.447 Data: single 8.00MiB 00:08:20.447 Metadata: DUP 32.00MiB 00:08:20.447 System: DUP 8.00MiB 00:08:20.447 SSD detected: yes 00:08:20.447 Zoned device: no 00:08:20.447 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.447 Checksum: crc32c 00:08:20.447 Number of devices: 1 00:08:20.447 Devices: 00:08:20.447 ID SIZE PATH 00:08:20.447 1 510.00MiB /dev/nvme0n1p1 00:08:20.447 00:08:20.447 09:55:18 -- common/autotest_common.sh@931 -- # return 0 00:08:20.447 09:55:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.447 09:55:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.447 09:55:18 -- target/filesystem.sh@25 -- # sync 00:08:20.447 09:55:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.447 09:55:18 -- target/filesystem.sh@27 -- # sync 00:08:20.447 09:55:18 -- target/filesystem.sh@29 -- # i=0 00:08:20.447 09:55:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.447 09:55:18 -- target/filesystem.sh@37 -- # kill -0 72526 00:08:20.447 09:55:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.447 09:55:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.447 09:55:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.447 09:55:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.447 00:08:20.447 real 0m0.226s 00:08:20.447 user 0m0.014s 00:08:20.447 sys 0m0.064s 00:08:20.447 09:55:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:20.447 09:55:18 -- common/autotest_common.sh@10 -- # set +x 00:08:20.447 ************************************ 00:08:20.447 END TEST filesystem_btrfs 00:08:20.447 ************************************ 00:08:20.447 09:55:18 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:20.447 09:55:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:20.447 09:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.447 09:55:18 -- common/autotest_common.sh@10 -- # set +x 00:08:20.447 ************************************ 00:08:20.447 START TEST filesystem_xfs 00:08:20.447 ************************************ 00:08:20.447 09:55:18 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:20.447 09:55:18 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:20.447 09:55:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.447 09:55:18 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:20.447 09:55:18 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:20.447 09:55:18 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:20.447 09:55:18 -- common/autotest_common.sh@914 -- # local i=0 00:08:20.447 09:55:18 -- common/autotest_common.sh@915 -- # local force 00:08:20.447 09:55:18 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:20.447 09:55:18 -- common/autotest_common.sh@920 -- # force=-f 00:08:20.447 09:55:18 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:20.447 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:20.447 = sectsz=512 attr=2, projid32bit=1 00:08:20.447 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:20.447 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:20.447 data = bsize=4096 blocks=130560, imaxpct=25 00:08:20.447 = sunit=0 swidth=0 blks 00:08:20.447 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:20.447 log =internal log bsize=4096 blocks=16384, version=2 00:08:20.447 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:20.447 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:21.383 Discarding blocks...Done. 00:08:21.383 09:55:19 -- common/autotest_common.sh@931 -- # return 0 00:08:21.383 09:55:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:23.912 09:55:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:23.912 09:55:21 -- target/filesystem.sh@25 -- # sync 00:08:23.912 09:55:22 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:23.912 09:55:22 -- target/filesystem.sh@27 -- # sync 00:08:23.912 09:55:22 -- target/filesystem.sh@29 -- # i=0 00:08:23.912 09:55:22 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:23.912 09:55:22 -- target/filesystem.sh@37 -- # kill -0 72526 00:08:23.912 09:55:22 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:23.912 09:55:22 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:23.912 09:55:22 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:23.912 09:55:22 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:23.912 00:08:23.912 real 0m3.127s 00:08:23.912 user 0m0.021s 00:08:23.912 sys 0m0.056s 00:08:23.912 09:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.912 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:23.912 ************************************ 00:08:23.912 END TEST filesystem_xfs 00:08:23.912 ************************************ 00:08:23.912 09:55:22 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.912 09:55:22 -- target/filesystem.sh@93 -- # sync 00:08:23.912 09:55:22 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.912 09:55:22 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.912 09:55:22 -- common/autotest_common.sh@1208 -- # local i=0 00:08:23.912 09:55:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:23.912 09:55:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.912 09:55:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:23.912 09:55:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.912 09:55:22 -- common/autotest_common.sh@1220 -- # return 0 00:08:23.912 09:55:22 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.912 09:55:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.912 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:23.912 09:55:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.912 09:55:22 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.912 09:55:22 -- target/filesystem.sh@101 -- # killprocess 72526 00:08:23.912 09:55:22 -- common/autotest_common.sh@936 -- # '[' -z 72526 ']' 00:08:23.912 09:55:22 -- common/autotest_common.sh@940 -- # kill -0 72526 00:08:23.912 09:55:22 -- common/autotest_common.sh@941 -- # uname 00:08:23.912 09:55:22 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.912 09:55:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72526 00:08:23.912 09:55:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:23.912 09:55:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:23.912 killing process with pid 72526 00:08:23.912 09:55:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72526' 00:08:23.912 09:55:22 -- common/autotest_common.sh@955 -- # kill 72526 00:08:23.912 09:55:22 -- common/autotest_common.sh@960 -- # wait 72526 00:08:24.170 09:55:22 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:24.170 00:08:24.170 real 0m14.451s 00:08:24.170 user 0m55.387s 00:08:24.170 sys 0m2.112s 00:08:24.170 ************************************ 00:08:24.170 END TEST nvmf_filesystem_no_in_capsule 00:08:24.170 09:55:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.170 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:24.170 ************************************ 00:08:24.170 09:55:22 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:24.170 09:55:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.170 09:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.170 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:24.170 ************************************ 00:08:24.170 START TEST nvmf_filesystem_in_capsule 00:08:24.170 ************************************ 00:08:24.170 09:55:22 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:24.170 09:55:22 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:24.170 09:55:22 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:24.170 09:55:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:24.170 09:55:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.170 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:24.170 09:55:22 -- nvmf/common.sh@469 -- # nvmfpid=72904 00:08:24.170 09:55:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.170 09:55:22 -- nvmf/common.sh@470 -- # waitforlisten 72904 00:08:24.170 09:55:22 -- common/autotest_common.sh@829 -- # '[' -z 72904 ']' 00:08:24.170 09:55:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.170 09:55:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.170 09:55:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.170 09:55:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.170 09:55:22 -- common/autotest_common.sh@10 -- # set +x 00:08:24.170 [2024-12-16 09:55:22.729593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
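Both filesystem test halves (in-capsule data size 0 above, 4096 here) configure the freshly started target through the same RPC sequence before connecting from the host. A condensed sketch, assuming SPDK's bundled scripts/rpc.py against the default /var/tmp/spdk.sock socket (the test itself goes through the rpc_cmd wrapper):

# Target-side configuration; only the -c (in-capsule data size) argument to
# nvmf_create_transport differs between the two runs (0 vs. 4096).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect and wait until the namespace shows up with the expected serial.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed \
  --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME       # expect exactly 1 device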
00:08:24.170 [2024-12-16 09:55:22.729696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.429 [2024-12-16 09:55:22.863639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.429 [2024-12-16 09:55:22.920715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.429 [2024-12-16 09:55:22.920934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.429 [2024-12-16 09:55:22.920950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.429 [2024-12-16 09:55:22.920960] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.429 [2024-12-16 09:55:22.921315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.429 [2024-12-16 09:55:22.921434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.429 [2024-12-16 09:55:22.921642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.429 [2024-12-16 09:55:22.921649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.365 09:55:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.365 09:55:23 -- common/autotest_common.sh@862 -- # return 0 00:08:25.365 09:55:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:25.365 09:55:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 09:55:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.365 09:55:23 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:25.365 09:55:23 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:25.365 09:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 [2024-12-16 09:55:23.790031] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.365 09:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.365 09:55:23 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:25.365 09:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 Malloc1 00:08:25.365 09:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.365 09:55:23 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.365 09:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 09:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.365 09:55:23 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:25.365 09:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 09:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.365 09:55:23 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.365 09:55:23 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.365 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.365 [2024-12-16 09:55:23.987149] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.624 09:55:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.624 09:55:23 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:25.624 09:55:23 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:25.624 09:55:23 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:25.624 09:55:23 -- common/autotest_common.sh@1369 -- # local bs 00:08:25.624 09:55:23 -- common/autotest_common.sh@1370 -- # local nb 00:08:25.624 09:55:23 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:25.624 09:55:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.624 09:55:23 -- common/autotest_common.sh@10 -- # set +x 00:08:25.624 09:55:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.624 09:55:24 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:25.624 { 00:08:25.624 "aliases": [ 00:08:25.624 "391f861b-613d-445c-8e3a-6c2d8e986bab" 00:08:25.624 ], 00:08:25.624 "assigned_rate_limits": { 00:08:25.624 "r_mbytes_per_sec": 0, 00:08:25.624 "rw_ios_per_sec": 0, 00:08:25.624 "rw_mbytes_per_sec": 0, 00:08:25.624 "w_mbytes_per_sec": 0 00:08:25.624 }, 00:08:25.624 "block_size": 512, 00:08:25.624 "claim_type": "exclusive_write", 00:08:25.624 "claimed": true, 00:08:25.624 "driver_specific": {}, 00:08:25.624 "memory_domains": [ 00:08:25.624 { 00:08:25.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.624 "dma_device_type": 2 00:08:25.624 } 00:08:25.624 ], 00:08:25.624 "name": "Malloc1", 00:08:25.624 "num_blocks": 1048576, 00:08:25.624 "product_name": "Malloc disk", 00:08:25.624 "supported_io_types": { 00:08:25.624 "abort": true, 00:08:25.624 "compare": false, 00:08:25.624 "compare_and_write": false, 00:08:25.624 "flush": true, 00:08:25.624 "nvme_admin": false, 00:08:25.624 "nvme_io": false, 00:08:25.624 "read": true, 00:08:25.624 "reset": true, 00:08:25.624 "unmap": true, 00:08:25.624 "write": true, 00:08:25.624 "write_zeroes": true 00:08:25.624 }, 00:08:25.624 "uuid": "391f861b-613d-445c-8e3a-6c2d8e986bab", 00:08:25.624 "zoned": false 00:08:25.624 } 00:08:25.624 ]' 00:08:25.624 09:55:24 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:25.624 09:55:24 -- common/autotest_common.sh@1372 -- # bs=512 00:08:25.624 09:55:24 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:25.624 09:55:24 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:25.624 09:55:24 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:25.624 09:55:24 -- common/autotest_common.sh@1377 -- # echo 512 00:08:25.624 09:55:24 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:25.624 09:55:24 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.883 09:55:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.883 09:55:24 -- common/autotest_common.sh@1187 -- # local i=0 00:08:25.883 09:55:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.883 09:55:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:25.883 09:55:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:27.785 09:55:26 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:27.785 09:55:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:27.785 09:55:26 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.785 09:55:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:27.785 09:55:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.785 09:55:26 -- common/autotest_common.sh@1197 -- # return 0 00:08:27.785 09:55:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:27.785 09:55:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:27.785 09:55:26 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:27.785 09:55:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:27.785 09:55:26 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:27.785 09:55:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:27.785 09:55:26 -- setup/common.sh@80 -- # echo 536870912 00:08:27.785 09:55:26 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:27.785 09:55:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:27.785 09:55:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:27.785 09:55:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:27.785 09:55:26 -- target/filesystem.sh@69 -- # partprobe 00:08:28.043 09:55:26 -- target/filesystem.sh@70 -- # sleep 1 00:08:28.979 09:55:27 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:28.979 09:55:27 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:28.979 09:55:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:28.979 09:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.979 09:55:27 -- common/autotest_common.sh@10 -- # set +x 00:08:28.979 ************************************ 00:08:28.979 START TEST filesystem_in_capsule_ext4 00:08:28.979 ************************************ 00:08:28.979 09:55:27 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:28.979 09:55:27 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:28.979 09:55:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.979 09:55:27 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:28.979 09:55:27 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:28.979 09:55:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:28.979 09:55:27 -- common/autotest_common.sh@914 -- # local i=0 00:08:28.979 09:55:27 -- common/autotest_common.sh@915 -- # local force 00:08:28.979 09:55:27 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:28.979 09:55:27 -- common/autotest_common.sh@918 -- # force=-F 00:08:28.979 09:55:27 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:28.979 mke2fs 1.47.0 (5-Feb-2023) 00:08:28.979 Discarding device blocks: 0/522240 done 00:08:28.979 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:28.979 Filesystem UUID: e7e3f3a0-cb41-41c8-ab36-7191a48aaff5 00:08:28.979 Superblock backups stored on blocks: 00:08:28.979 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:28.979 00:08:28.979 Allocating group tables: 0/64 done 00:08:28.979 Writing inode tables: 0/64 done 00:08:28.979 Creating journal (8192 blocks): done 00:08:28.979 Writing superblocks and filesystem accounting information: 0/64 done 00:08:28.979 00:08:28.979 09:55:27 
-- common/autotest_common.sh@931 -- # return 0 00:08:28.979 09:55:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:34.249 09:55:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:34.508 09:55:32 -- target/filesystem.sh@25 -- # sync 00:08:34.508 09:55:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:34.508 09:55:32 -- target/filesystem.sh@27 -- # sync 00:08:34.508 09:55:32 -- target/filesystem.sh@29 -- # i=0 00:08:34.508 09:55:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:34.508 09:55:32 -- target/filesystem.sh@37 -- # kill -0 72904 00:08:34.508 09:55:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:34.508 09:55:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:34.508 09:55:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:34.508 09:55:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:34.508 00:08:34.508 real 0m5.549s 00:08:34.508 user 0m0.025s 00:08:34.508 sys 0m0.063s 00:08:34.508 09:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.508 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:34.508 ************************************ 00:08:34.508 END TEST filesystem_in_capsule_ext4 00:08:34.508 ************************************ 00:08:34.508 09:55:33 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:34.508 09:55:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:34.508 09:55:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.508 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:34.508 ************************************ 00:08:34.508 START TEST filesystem_in_capsule_btrfs 00:08:34.508 ************************************ 00:08:34.508 09:55:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:34.508 09:55:33 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:34.508 09:55:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:34.508 09:55:33 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:34.508 09:55:33 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:34.508 09:55:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:34.508 09:55:33 -- common/autotest_common.sh@914 -- # local i=0 00:08:34.508 09:55:33 -- common/autotest_common.sh@915 -- # local force 00:08:34.508 09:55:33 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:34.508 09:55:33 -- common/autotest_common.sh@920 -- # force=-f 00:08:34.508 09:55:33 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:34.767 btrfs-progs v6.8.1 00:08:34.767 See https://btrfs.readthedocs.io for more information. 00:08:34.767 00:08:34.767 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:34.767 NOTE: several default settings have changed in version 5.15, please make sure 00:08:34.767 this does not affect your deployments: 00:08:34.767 - DUP for metadata (-m dup) 00:08:34.767 - enabled no-holes (-O no-holes) 00:08:34.767 - enabled free-space-tree (-R free-space-tree) 00:08:34.767 00:08:34.767 Label: (null) 00:08:34.767 UUID: 76c46692-a4fa-4f9c-9231-5ee1a972f007 00:08:34.767 Node size: 16384 00:08:34.767 Sector size: 4096 (CPU page size: 4096) 00:08:34.767 Filesystem size: 510.00MiB 00:08:34.767 Block group profiles: 00:08:34.767 Data: single 8.00MiB 00:08:34.767 Metadata: DUP 32.00MiB 00:08:34.767 System: DUP 8.00MiB 00:08:34.767 SSD detected: yes 00:08:34.767 Zoned device: no 00:08:34.767 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:34.767 Checksum: crc32c 00:08:34.767 Number of devices: 1 00:08:34.767 Devices: 00:08:34.767 ID SIZE PATH 00:08:34.767 1 510.00MiB /dev/nvme0n1p1 00:08:34.767 00:08:34.767 09:55:33 -- common/autotest_common.sh@931 -- # return 0 00:08:34.767 09:55:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:34.767 09:55:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:34.767 09:55:33 -- target/filesystem.sh@25 -- # sync 00:08:34.767 09:55:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:34.767 09:55:33 -- target/filesystem.sh@27 -- # sync 00:08:34.767 09:55:33 -- target/filesystem.sh@29 -- # i=0 00:08:34.767 09:55:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:34.767 09:55:33 -- target/filesystem.sh@37 -- # kill -0 72904 00:08:34.767 09:55:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:34.767 09:55:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:34.767 09:55:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:34.767 09:55:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:34.767 00:08:34.767 real 0m0.247s 00:08:34.767 user 0m0.017s 00:08:34.767 sys 0m0.067s 00:08:34.767 09:55:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.767 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:34.767 ************************************ 00:08:34.767 END TEST filesystem_in_capsule_btrfs 00:08:34.767 ************************************ 00:08:34.767 09:55:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:34.767 09:55:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:34.767 09:55:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.767 09:55:33 -- common/autotest_common.sh@10 -- # set +x 00:08:34.767 ************************************ 00:08:34.767 START TEST filesystem_in_capsule_xfs 00:08:34.767 ************************************ 00:08:34.767 09:55:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:34.767 09:55:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:34.767 09:55:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:34.767 09:55:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:34.767 09:55:33 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:34.767 09:55:33 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:34.767 09:55:33 -- common/autotest_common.sh@914 -- # local i=0 00:08:34.767 09:55:33 -- common/autotest_common.sh@915 -- # local force 00:08:34.767 09:55:33 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:34.767 09:55:33 -- common/autotest_common.sh@920 -- # force=-f 00:08:34.767 09:55:33 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:35.026 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:35.026 = sectsz=512 attr=2, projid32bit=1 00:08:35.026 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:35.026 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:35.026 data = bsize=4096 blocks=130560, imaxpct=25 00:08:35.026 = sunit=0 swidth=0 blks 00:08:35.026 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:35.026 log =internal log bsize=4096 blocks=16384, version=2 00:08:35.026 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:35.026 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:35.594 Discarding blocks...Done. 00:08:35.594 09:55:34 -- common/autotest_common.sh@931 -- # return 0 00:08:35.594 09:55:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.496 09:55:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.496 09:55:35 -- target/filesystem.sh@25 -- # sync 00:08:37.496 09:55:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.496 09:55:35 -- target/filesystem.sh@27 -- # sync 00:08:37.496 09:55:35 -- target/filesystem.sh@29 -- # i=0 00:08:37.496 09:55:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.496 09:55:35 -- target/filesystem.sh@37 -- # kill -0 72904 00:08:37.496 09:55:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.496 09:55:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.496 09:55:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.496 09:55:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.496 00:08:37.496 real 0m2.623s 00:08:37.496 user 0m0.019s 00:08:37.496 sys 0m0.059s 00:08:37.496 09:55:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.496 09:55:35 -- common/autotest_common.sh@10 -- # set +x 00:08:37.496 ************************************ 00:08:37.496 END TEST filesystem_in_capsule_xfs 00:08:37.496 ************************************ 00:08:37.496 09:55:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:37.496 09:55:36 -- target/filesystem.sh@93 -- # sync 00:08:37.496 09:55:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.496 09:55:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.496 09:55:36 -- common/autotest_common.sh@1208 -- # local i=0 00:08:37.496 09:55:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:37.496 09:55:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.496 09:55:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:37.496 09:55:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.496 09:55:36 -- common/autotest_common.sh@1220 -- # return 0 00:08:37.496 09:55:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.496 09:55:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.496 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:37.496 09:55:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.496 09:55:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:37.496 09:55:36 -- target/filesystem.sh@101 -- # killprocess 72904 00:08:37.496 09:55:36 -- common/autotest_common.sh@936 -- # '[' -z 72904 ']' 00:08:37.496 09:55:36 -- common/autotest_common.sh@940 -- # kill -0 72904 00:08:37.496 09:55:36 -- 
common/autotest_common.sh@941 -- # uname 00:08:37.755 09:55:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.755 09:55:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72904 00:08:37.755 09:55:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.755 09:55:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.755 killing process with pid 72904 00:08:37.755 09:55:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72904' 00:08:37.755 09:55:36 -- common/autotest_common.sh@955 -- # kill 72904 00:08:37.755 09:55:36 -- common/autotest_common.sh@960 -- # wait 72904 00:08:38.014 09:55:36 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:38.014 00:08:38.014 real 0m13.909s 00:08:38.014 user 0m53.274s 00:08:38.014 sys 0m2.125s 00:08:38.014 09:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.014 ************************************ 00:08:38.014 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.014 END TEST nvmf_filesystem_in_capsule 00:08:38.014 ************************************ 00:08:38.014 09:55:36 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:38.014 09:55:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:38.014 09:55:36 -- nvmf/common.sh@116 -- # sync 00:08:38.273 09:55:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:38.273 09:55:36 -- nvmf/common.sh@119 -- # set +e 00:08:38.273 09:55:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:38.273 09:55:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:38.273 rmmod nvme_tcp 00:08:38.273 rmmod nvme_fabrics 00:08:38.273 rmmod nvme_keyring 00:08:38.273 09:55:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:38.273 09:55:36 -- nvmf/common.sh@123 -- # set -e 00:08:38.273 09:55:36 -- nvmf/common.sh@124 -- # return 0 00:08:38.273 09:55:36 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:38.273 09:55:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:38.273 09:55:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:38.273 09:55:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:38.273 09:55:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.273 09:55:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:38.273 09:55:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.273 09:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.273 09:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.273 09:55:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:38.273 00:08:38.273 real 0m29.325s 00:08:38.273 user 1m49.057s 00:08:38.273 sys 0m4.643s 00:08:38.273 09:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.273 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.273 ************************************ 00:08:38.273 END TEST nvmf_filesystem 00:08:38.273 ************************************ 00:08:38.273 09:55:36 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.273 09:55:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:38.274 09:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.274 09:55:36 -- common/autotest_common.sh@10 -- # set +x 00:08:38.274 ************************************ 00:08:38.274 START TEST nvmf_discovery 00:08:38.274 ************************************ 00:08:38.274 09:55:36 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:38.274 * Looking for test storage... 00:08:38.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.274 09:55:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.274 09:55:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.274 09:55:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.533 09:55:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.533 09:55:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.533 09:55:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.533 09:55:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.533 09:55:36 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.533 09:55:36 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.533 09:55:36 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.533 09:55:36 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.533 09:55:36 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.533 09:55:36 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.533 09:55:36 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.533 09:55:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.533 09:55:36 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.533 09:55:36 -- scripts/common.sh@344 -- # : 1 00:08:38.533 09:55:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.533 09:55:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.533 09:55:36 -- scripts/common.sh@364 -- # decimal 1 00:08:38.533 09:55:36 -- scripts/common.sh@352 -- # local d=1 00:08:38.533 09:55:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.533 09:55:36 -- scripts/common.sh@354 -- # echo 1 00:08:38.533 09:55:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.533 09:55:36 -- scripts/common.sh@365 -- # decimal 2 00:08:38.533 09:55:36 -- scripts/common.sh@352 -- # local d=2 00:08:38.533 09:55:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.533 09:55:36 -- scripts/common.sh@354 -- # echo 2 00:08:38.533 09:55:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.533 09:55:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.533 09:55:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.533 09:55:36 -- scripts/common.sh@367 -- # return 0 00:08:38.533 09:55:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.533 09:55:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.533 --rc genhtml_branch_coverage=1 00:08:38.533 --rc genhtml_function_coverage=1 00:08:38.533 --rc genhtml_legend=1 00:08:38.533 --rc geninfo_all_blocks=1 00:08:38.533 --rc geninfo_unexecuted_blocks=1 00:08:38.533 00:08:38.533 ' 00:08:38.533 09:55:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.533 --rc genhtml_branch_coverage=1 00:08:38.533 --rc genhtml_function_coverage=1 00:08:38.533 --rc genhtml_legend=1 00:08:38.533 --rc geninfo_all_blocks=1 00:08:38.533 --rc geninfo_unexecuted_blocks=1 00:08:38.533 00:08:38.533 ' 00:08:38.533 09:55:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.533 --rc genhtml_branch_coverage=1 00:08:38.533 --rc genhtml_function_coverage=1 00:08:38.533 --rc genhtml_legend=1 00:08:38.533 
--rc geninfo_all_blocks=1 00:08:38.533 --rc geninfo_unexecuted_blocks=1 00:08:38.533 00:08:38.533 ' 00:08:38.533 09:55:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.533 --rc genhtml_branch_coverage=1 00:08:38.533 --rc genhtml_function_coverage=1 00:08:38.533 --rc genhtml_legend=1 00:08:38.533 --rc geninfo_all_blocks=1 00:08:38.533 --rc geninfo_unexecuted_blocks=1 00:08:38.533 00:08:38.533 ' 00:08:38.533 09:55:36 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.533 09:55:36 -- nvmf/common.sh@7 -- # uname -s 00:08:38.533 09:55:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.533 09:55:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.533 09:55:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.533 09:55:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.533 09:55:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.533 09:55:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.533 09:55:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.533 09:55:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.533 09:55:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.533 09:55:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:38.533 09:55:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:38.533 09:55:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.533 09:55:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.533 09:55:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.533 09:55:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.533 09:55:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.533 09:55:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.533 09:55:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.533 09:55:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.533 09:55:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.533 09:55:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.533 09:55:36 -- paths/export.sh@5 -- # export PATH 00:08:38.533 09:55:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.533 09:55:36 -- nvmf/common.sh@46 -- # : 0 00:08:38.533 09:55:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.533 09:55:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.533 09:55:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.533 09:55:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.533 09:55:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.533 09:55:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.533 09:55:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.533 09:55:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.533 09:55:36 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:38.533 09:55:36 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:38.533 09:55:36 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:38.533 09:55:36 -- target/discovery.sh@15 -- # hash nvme 00:08:38.533 09:55:36 -- target/discovery.sh@20 -- # nvmftestinit 00:08:38.533 09:55:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.533 09:55:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.533 09:55:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.533 09:55:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.533 09:55:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.533 09:55:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.533 09:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.533 09:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.533 09:55:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:38.533 09:55:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:38.533 09:55:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.533 09:55:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.533 09:55:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.533 09:55:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:38.533 09:55:36 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.533 09:55:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.533 09:55:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.533 09:55:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.533 09:55:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.533 09:55:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.533 09:55:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.533 09:55:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.533 09:55:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:38.533 09:55:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:38.533 Cannot find device "nvmf_tgt_br" 00:08:38.534 09:55:37 -- nvmf/common.sh@154 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.534 Cannot find device "nvmf_tgt_br2" 00:08:38.534 09:55:37 -- nvmf/common.sh@155 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:38.534 09:55:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:38.534 Cannot find device "nvmf_tgt_br" 00:08:38.534 09:55:37 -- nvmf/common.sh@157 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:38.534 Cannot find device "nvmf_tgt_br2" 00:08:38.534 09:55:37 -- nvmf/common.sh@158 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:38.534 09:55:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:38.534 09:55:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.534 09:55:37 -- nvmf/common.sh@161 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.534 09:55:37 -- nvmf/common.sh@162 -- # true 00:08:38.534 09:55:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.534 09:55:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.534 09:55:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.792 09:55:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.792 09:55:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.792 09:55:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.792 09:55:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.792 09:55:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.792 09:55:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.792 09:55:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:38.792 09:55:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:38.792 09:55:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:38.792 09:55:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:38.792 09:55:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.792 09:55:37 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.793 09:55:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.793 09:55:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:38.793 09:55:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:38.793 09:55:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.793 09:55:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.793 09:55:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.793 09:55:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.793 09:55:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.793 09:55:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:38.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:08:38.793 00:08:38.793 --- 10.0.0.2 ping statistics --- 00:08:38.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.793 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:38.793 09:55:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:38.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:38.793 00:08:38.793 --- 10.0.0.3 ping statistics --- 00:08:38.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.793 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:38.793 09:55:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:38.793 00:08:38.793 --- 10.0.0.1 ping statistics --- 00:08:38.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.793 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:38.793 09:55:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.793 09:55:37 -- nvmf/common.sh@421 -- # return 0 00:08:38.793 09:55:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.793 09:55:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.793 09:55:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:38.793 09:55:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:38.793 09:55:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.793 09:55:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:38.793 09:55:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:38.793 09:55:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:38.793 09:55:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.793 09:55:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.793 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:08:38.793 09:55:37 -- nvmf/common.sh@469 -- # nvmfpid=73445 00:08:38.793 09:55:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.793 09:55:37 -- nvmf/common.sh@470 -- # waitforlisten 73445 00:08:38.793 09:55:37 -- common/autotest_common.sh@829 -- # '[' -z 73445 ']' 00:08:38.793 09:55:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.793 09:55:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.793 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.793 09:55:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.793 09:55:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.793 09:55:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.051 [2024-12-16 09:55:37.436926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:39.051 [2024-12-16 09:55:37.437018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.051 [2024-12-16 09:55:37.581669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.051 [2024-12-16 09:55:37.667245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.051 [2024-12-16 09:55:37.667742] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.051 [2024-12-16 09:55:37.667890] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.052 [2024-12-16 09:55:37.668078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.052 [2024-12-16 09:55:37.668322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.052 [2024-12-16 09:55:37.668503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.052 [2024-12-16 09:55:37.668654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.052 [2024-12-16 09:55:37.668569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.988 09:55:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.988 09:55:38 -- common/autotest_common.sh@862 -- # return 0 00:08:39.988 09:55:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.988 09:55:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.988 09:55:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 [2024-12-16 09:55:38.489591] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@26 -- # seq 1 4 00:08:39.988 09:55:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.988 09:55:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 Null1 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
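The provisioning loop traced here repeats the same four-step pattern for Null1/cnode1 through Null4/cnode4: create a null bdev, create a subsystem, attach the bdev as a namespace, and add a TCP listener. Condensed into direct rpc.py calls (a minimal sketch; rpc_cmd in this run is assumed to be a thin wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket shown above), one iteration looks like:

  # one iteration of the discovery.sh setup loop, written out as plain rpc.py calls
  scripts/rpc.py bdev_null_create Null1 102400 512                                       # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420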
00:08:39.988 09:55:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 [2024-12-16 09:55:38.546892] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.988 09:55:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 Null2 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.988 09:55:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 Null3 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:39.988 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.988 09:55:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:39.988 09:55:38 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:39.988 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:40.247 09:55:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 Null4 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:40.247 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.247 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.247 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.247 09:55:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 4420 00:08:40.247 00:08:40.247 Discovery Log Number of Records 6, Generation counter 6 00:08:40.247 =====Discovery Log Entry 0====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: current discovery subsystem 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4420 00:08:40.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: explicit discovery connections, duplicate discovery information 00:08:40.247 sectype: none 00:08:40.247 =====Discovery Log Entry 1====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: nvme subsystem 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4420 00:08:40.247 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: none 00:08:40.247 sectype: none 00:08:40.247 =====Discovery Log Entry 2====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: nvme subsystem 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4420 
00:08:40.247 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: none 00:08:40.247 sectype: none 00:08:40.247 =====Discovery Log Entry 3====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: nvme subsystem 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4420 00:08:40.247 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: none 00:08:40.247 sectype: none 00:08:40.247 =====Discovery Log Entry 4====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: nvme subsystem 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4420 00:08:40.247 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: none 00:08:40.247 sectype: none 00:08:40.247 =====Discovery Log Entry 5====== 00:08:40.247 trtype: tcp 00:08:40.247 adrfam: ipv4 00:08:40.247 subtype: discovery subsystem referral 00:08:40.247 treq: not required 00:08:40.247 portid: 0 00:08:40.247 trsvcid: 4430 00:08:40.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:40.247 traddr: 10.0.0.2 00:08:40.247 eflags: none 00:08:40.248 sectype: none 00:08:40.248 Perform nvmf subsystem discovery via RPC 00:08:40.248 09:55:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:40.248 09:55:38 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 [2024-12-16 09:55:38.774885] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:40.248 [ 00:08:40.248 { 00:08:40.248 "allow_any_host": true, 00:08:40.248 "hosts": [], 00:08:40.248 "listen_addresses": [ 00:08:40.248 { 00:08:40.248 "adrfam": "IPv4", 00:08:40.248 "traddr": "10.0.0.2", 00:08:40.248 "transport": "TCP", 00:08:40.248 "trsvcid": "4420", 00:08:40.248 "trtype": "TCP" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:40.248 "subtype": "Discovery" 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "allow_any_host": true, 00:08:40.248 "hosts": [], 00:08:40.248 "listen_addresses": [ 00:08:40.248 { 00:08:40.248 "adrfam": "IPv4", 00:08:40.248 "traddr": "10.0.0.2", 00:08:40.248 "transport": "TCP", 00:08:40.248 "trsvcid": "4420", 00:08:40.248 "trtype": "TCP" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "max_cntlid": 65519, 00:08:40.248 "max_namespaces": 32, 00:08:40.248 "min_cntlid": 1, 00:08:40.248 "model_number": "SPDK bdev Controller", 00:08:40.248 "namespaces": [ 00:08:40.248 { 00:08:40.248 "bdev_name": "Null1", 00:08:40.248 "name": "Null1", 00:08:40.248 "nguid": "4D515D7A758346FE8BC5996971E912AD", 00:08:40.248 "nsid": 1, 00:08:40.248 "uuid": "4d515d7a-7583-46fe-8bc5-996971e912ad" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.248 "serial_number": "SPDK00000000000001", 00:08:40.248 "subtype": "NVMe" 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "allow_any_host": true, 00:08:40.248 "hosts": [], 00:08:40.248 "listen_addresses": [ 00:08:40.248 { 00:08:40.248 "adrfam": "IPv4", 00:08:40.248 "traddr": "10.0.0.2", 00:08:40.248 "transport": "TCP", 00:08:40.248 "trsvcid": "4420", 00:08:40.248 "trtype": "TCP" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "max_cntlid": 65519, 00:08:40.248 "max_namespaces": 32, 00:08:40.248 "min_cntlid": 1, 
00:08:40.248 "model_number": "SPDK bdev Controller", 00:08:40.248 "namespaces": [ 00:08:40.248 { 00:08:40.248 "bdev_name": "Null2", 00:08:40.248 "name": "Null2", 00:08:40.248 "nguid": "817B8D1EE5394E3485E2BC58ECD6AC2F", 00:08:40.248 "nsid": 1, 00:08:40.248 "uuid": "817b8d1e-e539-4e34-85e2-bc58ecd6ac2f" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:40.248 "serial_number": "SPDK00000000000002", 00:08:40.248 "subtype": "NVMe" 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "allow_any_host": true, 00:08:40.248 "hosts": [], 00:08:40.248 "listen_addresses": [ 00:08:40.248 { 00:08:40.248 "adrfam": "IPv4", 00:08:40.248 "traddr": "10.0.0.2", 00:08:40.248 "transport": "TCP", 00:08:40.248 "trsvcid": "4420", 00:08:40.248 "trtype": "TCP" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "max_cntlid": 65519, 00:08:40.248 "max_namespaces": 32, 00:08:40.248 "min_cntlid": 1, 00:08:40.248 "model_number": "SPDK bdev Controller", 00:08:40.248 "namespaces": [ 00:08:40.248 { 00:08:40.248 "bdev_name": "Null3", 00:08:40.248 "name": "Null3", 00:08:40.248 "nguid": "5F6A210F9D164C0BBC62D0096D7CC779", 00:08:40.248 "nsid": 1, 00:08:40.248 "uuid": "5f6a210f-9d16-4c0b-bc62-d0096d7cc779" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:40.248 "serial_number": "SPDK00000000000003", 00:08:40.248 "subtype": "NVMe" 00:08:40.248 }, 00:08:40.248 { 00:08:40.248 "allow_any_host": true, 00:08:40.248 "hosts": [], 00:08:40.248 "listen_addresses": [ 00:08:40.248 { 00:08:40.248 "adrfam": "IPv4", 00:08:40.248 "traddr": "10.0.0.2", 00:08:40.248 "transport": "TCP", 00:08:40.248 "trsvcid": "4420", 00:08:40.248 "trtype": "TCP" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "max_cntlid": 65519, 00:08:40.248 "max_namespaces": 32, 00:08:40.248 "min_cntlid": 1, 00:08:40.248 "model_number": "SPDK bdev Controller", 00:08:40.248 "namespaces": [ 00:08:40.248 { 00:08:40.248 "bdev_name": "Null4", 00:08:40.248 "name": "Null4", 00:08:40.248 "nguid": "863B679F825E4233AA2A9D9E42B088F6", 00:08:40.248 "nsid": 1, 00:08:40.248 "uuid": "863b679f-825e-4233-aa2a-9d9e42b088f6" 00:08:40.248 } 00:08:40.248 ], 00:08:40.248 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:40.248 "serial_number": "SPDK00000000000004", 00:08:40.248 "subtype": "NVMe" 00:08:40.248 } 00:08:40.248 ] 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@42 -- # seq 1 4 00:08:40.248 09:55:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.248 09:55:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.248 09:55:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.248 09:55:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.248 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.248 09:55:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:40.248 09:55:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:40.248 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.248 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.507 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.507 09:55:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:40.507 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.507 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.507 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.507 09:55:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:40.507 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.507 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.507 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.507 09:55:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:40.507 09:55:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:40.507 09:55:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.507 09:55:38 -- common/autotest_common.sh@10 -- # set +x 00:08:40.507 09:55:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.507 09:55:38 -- target/discovery.sh@49 -- # check_bdevs= 00:08:40.507 09:55:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:40.507 09:55:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:40.507 09:55:38 -- target/discovery.sh@57 -- # nvmftestfini 00:08:40.507 09:55:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.507 09:55:38 -- nvmf/common.sh@116 -- # sync 00:08:40.507 09:55:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:40.507 09:55:38 -- nvmf/common.sh@119 -- # set +e 00:08:40.507 09:55:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.507 09:55:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:40.507 rmmod nvme_tcp 00:08:40.507 rmmod nvme_fabrics 00:08:40.507 rmmod nvme_keyring 00:08:40.507 09:55:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.507 09:55:39 -- nvmf/common.sh@123 -- # set -e 00:08:40.507 09:55:39 -- nvmf/common.sh@124 -- # return 0 00:08:40.507 09:55:39 -- nvmf/common.sh@477 -- # '[' -n 73445 ']' 00:08:40.507 09:55:39 -- nvmf/common.sh@478 -- # killprocess 73445 00:08:40.507 09:55:39 -- common/autotest_common.sh@936 -- # '[' -z 73445 ']' 00:08:40.507 09:55:39 -- 
common/autotest_common.sh@940 -- # kill -0 73445 00:08:40.507 09:55:39 -- common/autotest_common.sh@941 -- # uname 00:08:40.507 09:55:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.507 09:55:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73445 00:08:40.507 09:55:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.507 09:55:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.507 killing process with pid 73445 00:08:40.507 09:55:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73445' 00:08:40.507 09:55:39 -- common/autotest_common.sh@955 -- # kill 73445 00:08:40.507 [2024-12-16 09:55:39.054970] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:40.507 09:55:39 -- common/autotest_common.sh@960 -- # wait 73445 00:08:40.765 09:55:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.765 09:55:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:40.765 09:55:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:40.765 09:55:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.765 09:55:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:40.765 09:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.765 09:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.765 09:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.765 09:55:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:40.765 00:08:40.765 real 0m2.522s 00:08:40.765 user 0m6.808s 00:08:40.765 sys 0m0.665s 00:08:40.765 09:55:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.765 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:40.765 ************************************ 00:08:40.765 END TEST nvmf_discovery 00:08:40.765 ************************************ 00:08:40.765 09:55:39 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:40.765 09:55:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.765 09:55:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.765 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:40.765 ************************************ 00:08:40.765 START TEST nvmf_referrals 00:08:40.765 ************************************ 00:08:40.765 09:55:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:41.024 * Looking for test storage... 
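The referrals test repeats the lcov version gate seen at the top of this section: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A condensed bash sketch of that comparison, reconstructed from the xtrace rather than taken verbatim from scripts/common.sh:

  lt() {
      local IFS=.-: v ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1     # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                                                # equal versions are not "less than"
  }
  # old lcov (< 2.x) needs the extra --rc coverage options exported as LCOV_OPTS below
  lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'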
00:08:41.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.024 09:55:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:41.024 09:55:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:41.024 09:55:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:41.024 09:55:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:41.024 09:55:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:41.024 09:55:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:41.024 09:55:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:41.024 09:55:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:41.024 09:55:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:41.024 09:55:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.024 09:55:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:41.024 09:55:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:41.024 09:55:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:41.024 09:55:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:41.024 09:55:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:41.024 09:55:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:41.024 09:55:39 -- scripts/common.sh@344 -- # : 1 00:08:41.024 09:55:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:41.024 09:55:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.024 09:55:39 -- scripts/common.sh@364 -- # decimal 1 00:08:41.024 09:55:39 -- scripts/common.sh@352 -- # local d=1 00:08:41.024 09:55:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.024 09:55:39 -- scripts/common.sh@354 -- # echo 1 00:08:41.024 09:55:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:41.024 09:55:39 -- scripts/common.sh@365 -- # decimal 2 00:08:41.024 09:55:39 -- scripts/common.sh@352 -- # local d=2 00:08:41.024 09:55:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.024 09:55:39 -- scripts/common.sh@354 -- # echo 2 00:08:41.024 09:55:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:41.024 09:55:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:41.024 09:55:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:41.024 09:55:39 -- scripts/common.sh@367 -- # return 0 00:08:41.024 09:55:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.024 09:55:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:41.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.024 --rc genhtml_branch_coverage=1 00:08:41.024 --rc genhtml_function_coverage=1 00:08:41.024 --rc genhtml_legend=1 00:08:41.024 --rc geninfo_all_blocks=1 00:08:41.024 --rc geninfo_unexecuted_blocks=1 00:08:41.024 00:08:41.024 ' 00:08:41.024 09:55:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:41.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.024 --rc genhtml_branch_coverage=1 00:08:41.024 --rc genhtml_function_coverage=1 00:08:41.024 --rc genhtml_legend=1 00:08:41.024 --rc geninfo_all_blocks=1 00:08:41.024 --rc geninfo_unexecuted_blocks=1 00:08:41.024 00:08:41.024 ' 00:08:41.024 09:55:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:41.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.024 --rc genhtml_branch_coverage=1 00:08:41.024 --rc genhtml_function_coverage=1 00:08:41.024 --rc genhtml_legend=1 00:08:41.024 --rc geninfo_all_blocks=1 00:08:41.024 --rc geninfo_unexecuted_blocks=1 00:08:41.024 00:08:41.024 ' 00:08:41.024 
09:55:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:41.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.024 --rc genhtml_branch_coverage=1 00:08:41.024 --rc genhtml_function_coverage=1 00:08:41.024 --rc genhtml_legend=1 00:08:41.024 --rc geninfo_all_blocks=1 00:08:41.024 --rc geninfo_unexecuted_blocks=1 00:08:41.024 00:08:41.024 ' 00:08:41.024 09:55:39 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.024 09:55:39 -- nvmf/common.sh@7 -- # uname -s 00:08:41.024 09:55:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.024 09:55:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.024 09:55:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.024 09:55:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.024 09:55:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.024 09:55:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.024 09:55:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.024 09:55:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.024 09:55:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.024 09:55:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.024 09:55:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:41.024 09:55:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:41.024 09:55:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.025 09:55:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.025 09:55:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.025 09:55:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.025 09:55:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.025 09:55:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.025 09:55:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.025 09:55:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.025 09:55:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.025 09:55:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.025 09:55:39 -- paths/export.sh@5 -- # export PATH 00:08:41.025 09:55:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.025 09:55:39 -- nvmf/common.sh@46 -- # : 0 00:08:41.025 09:55:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.025 09:55:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.025 09:55:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.025 09:55:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.025 09:55:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.025 09:55:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.025 09:55:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.025 09:55:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.025 09:55:39 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:41.025 09:55:39 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:41.025 09:55:39 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:41.025 09:55:39 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:41.025 09:55:39 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:41.025 09:55:39 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:41.025 09:55:39 -- target/referrals.sh@37 -- # nvmftestinit 00:08:41.025 09:55:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:41.025 09:55:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.025 09:55:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.025 09:55:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.025 09:55:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.025 09:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.025 09:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.025 09:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.025 09:55:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:41.025 09:55:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:41.025 09:55:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:41.025 09:55:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:41.025 09:55:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:41.025 09:55:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:41.025 09:55:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.025 09:55:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
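nvmf_veth_init, traced next, rebuilds the same topology the discovery test used: a network namespace for the target plus three veth pairs, with 10.0.0.1/24 left on the host side and 10.0.0.2/24 and 10.0.0.3/24 assigned inside nvmf_tgt_ns_spdk. Condensed from the commands visible in this log (a sketch of the helper, not its full cleanup and error handling):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target ends are moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2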
00:08:41.025 09:55:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.025 09:55:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:41.025 09:55:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.025 09:55:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.025 09:55:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.025 09:55:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.025 09:55:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.025 09:55:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.025 09:55:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.025 09:55:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.025 09:55:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:41.025 09:55:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:41.025 Cannot find device "nvmf_tgt_br" 00:08:41.025 09:55:39 -- nvmf/common.sh@154 -- # true 00:08:41.025 09:55:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.284 Cannot find device "nvmf_tgt_br2" 00:08:41.284 09:55:39 -- nvmf/common.sh@155 -- # true 00:08:41.284 09:55:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:41.284 09:55:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:41.284 Cannot find device "nvmf_tgt_br" 00:08:41.284 09:55:39 -- nvmf/common.sh@157 -- # true 00:08:41.284 09:55:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:41.284 Cannot find device "nvmf_tgt_br2" 00:08:41.284 09:55:39 -- nvmf/common.sh@158 -- # true 00:08:41.284 09:55:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:41.284 09:55:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:41.284 09:55:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.284 09:55:39 -- nvmf/common.sh@161 -- # true 00:08:41.284 09:55:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.284 09:55:39 -- nvmf/common.sh@162 -- # true 00:08:41.284 09:55:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.284 09:55:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.284 09:55:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.284 09:55:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.284 09:55:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.284 09:55:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.284 09:55:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.284 09:55:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.284 09:55:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.284 09:55:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:41.284 09:55:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:41.284 09:55:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
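With addresses assigned and the links brought up, the remaining steps traced below enslave the three *_br peers to a bridge, open TCP port 4420 on the initiator interface, allow forwarding across the bridge, and confirm reachability with single pings in both directions:

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP port used by the tests
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host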
00:08:41.284 09:55:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:41.284 09:55:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.284 09:55:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.284 09:55:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.284 09:55:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:41.284 09:55:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:41.284 09:55:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.284 09:55:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.284 09:55:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.543 09:55:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.543 09:55:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.543 09:55:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:41.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:08:41.543 00:08:41.543 --- 10.0.0.2 ping statistics --- 00:08:41.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.543 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:41.543 09:55:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:41.543 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.543 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:41.543 00:08:41.543 --- 10.0.0.3 ping statistics --- 00:08:41.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.543 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:41.543 09:55:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:41.543 00:08:41.543 --- 10.0.0.1 ping statistics --- 00:08:41.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.543 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:41.543 09:55:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.543 09:55:39 -- nvmf/common.sh@421 -- # return 0 00:08:41.543 09:55:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:41.543 09:55:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.543 09:55:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:41.543 09:55:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:41.543 09:55:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.543 09:55:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:41.543 09:55:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:41.543 09:55:39 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:41.543 09:55:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:41.543 09:55:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.543 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:41.543 09:55:39 -- nvmf/common.sh@469 -- # nvmfpid=73686 00:08:41.543 09:55:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.543 09:55:39 -- nvmf/common.sh@470 -- # waitforlisten 73686 00:08:41.543 09:55:39 -- common/autotest_common.sh@829 -- # '[' -z 73686 ']' 00:08:41.543 09:55:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.543 09:55:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.543 09:55:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.543 09:55:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.543 09:55:39 -- common/autotest_common.sh@10 -- # set +x 00:08:41.543 [2024-12-16 09:55:40.000997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:41.543 [2024-12-16 09:55:40.001073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.543 [2024-12-16 09:55:40.143469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.802 [2024-12-16 09:55:40.222511] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.802 [2024-12-16 09:55:40.222685] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.802 [2024-12-16 09:55:40.222711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.803 [2024-12-16 09:55:40.222737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
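Once nvmf_tgt is running inside the namespace and listening on the RPC socket, referrals.sh configures the TCP transport, exposes the discovery subsystem on 10.0.0.2:8009, and registers three referrals pointing at 127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430. The rpc_cmd calls traced below amount to this rpc.py sequence (a sketch, assuming the default /var/tmp/spdk.sock RPC socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length               # the test expects 3 here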
00:08:41.803 [2024-12-16 09:55:40.222896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.803 [2024-12-16 09:55:40.223184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.803 [2024-12-16 09:55:40.223528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.803 [2024-12-16 09:55:40.223537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.739 09:55:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.739 09:55:41 -- common/autotest_common.sh@862 -- # return 0 00:08:42.739 09:55:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:42.739 09:55:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.739 09:55:41 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 [2024-12-16 09:55:41.122714] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 [2024-12-16 09:55:41.144900] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@48 -- # jq length 00:08:42.739 09:55:41 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:42.739 09:55:41 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:42.739 09:55:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:42.739 09:55:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.739 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:42.739 09:55:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:42.739 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.739 09:55:41 -- target/referrals.sh@21 -- # sort 00:08:42.739 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.739 09:55:41 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.739 09:55:41 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:42.739 09:55:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.739 09:55:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.739 09:55:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.739 09:55:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.739 09:55:41 -- target/referrals.sh@26 -- # sort 00:08:42.998 09:55:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:42.998 09:55:41 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:42.998 09:55:41 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:42.998 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.998 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.998 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.998 09:55:41 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:42.998 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.998 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.998 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.998 09:55:41 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:42.998 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.998 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.998 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.998 09:55:41 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:42.998 09:55:41 -- target/referrals.sh@56 -- # jq length 00:08:42.998 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.998 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:42.998 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.998 09:55:41 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:42.998 09:55:41 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:42.998 09:55:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:42.998 09:55:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:42.999 09:55:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:42.999 09:55:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:42.999 09:55:41 -- target/referrals.sh@26 -- # sort 00:08:43.257 09:55:41 -- target/referrals.sh@26 -- # echo 00:08:43.257 09:55:41 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:43.257 09:55:41 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:43.257 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.258 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 09:55:41 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.258 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.258 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 09:55:41 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:43.258 09:55:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.258 09:55:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.258 09:55:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.258 09:55:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.258 09:55:41 -- common/autotest_common.sh@10 -- # set +x 00:08:43.258 09:55:41 -- target/referrals.sh@21 -- # sort 00:08:43.258 09:55:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.258 09:55:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:43.258 09:55:41 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.258 09:55:41 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:43.258 09:55:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.258 09:55:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.258 09:55:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.258 09:55:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.258 09:55:41 -- target/referrals.sh@26 -- # sort 00:08:43.258 09:55:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:43.258 09:55:41 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:43.258 09:55:41 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:43.258 09:55:41 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:43.258 09:55:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.258 09:55:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.258 09:55:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.517 09:55:41 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:43.517 09:55:41 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.517 09:55:41 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:43.517 09:55:41 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.517 09:55:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
--hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.517 09:55:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:43.517 09:55:42 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:43.517 09:55:42 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:43.517 09:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.517 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:08:43.517 09:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.517 09:55:42 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:43.517 09:55:42 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:43.517 09:55:42 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:43.517 09:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.517 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:08:43.517 09:55:42 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:43.517 09:55:42 -- target/referrals.sh@21 -- # sort 00:08:43.517 09:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.776 09:55:42 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:43.776 09:55:42 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.776 09:55:42 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:43.776 09:55:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:43.776 09:55:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:43.776 09:55:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.776 09:55:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:43.776 09:55:42 -- target/referrals.sh@26 -- # sort 00:08:43.776 09:55:42 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:43.776 09:55:42 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:43.776 09:55:42 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:43.776 09:55:42 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:43.776 09:55:42 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:43.776 09:55:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.776 09:55:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:43.776 09:55:42 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:43.776 09:55:42 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:43.776 09:55:42 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:43.776 09:55:42 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:43.776 09:55:42 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:43.776 09:55:42 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
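The checks above compare two views of the same referral table: the target-side RPC list and the host-side discovery log page served on 10.0.0.2:8009. A sketch of the probes, with the jq filters copied from the trace (the hostnqn/hostid values are the UUID generated earlier by nvme gen-hostnqn):

    # target-side view: referral addresses known to the discovery service
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # host-side view: what an initiator sees in the discovery log page
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # per-entry check of the subsystem NQN carried by a referral, e.g. the cnode1 one
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn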
00:08:44.035 09:55:42 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:44.035 09:55:42 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:44.035 09:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.035 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:08:44.035 09:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.035 09:55:42 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.035 09:55:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.035 09:55:42 -- target/referrals.sh@82 -- # jq length 00:08:44.035 09:55:42 -- common/autotest_common.sh@10 -- # set +x 00:08:44.035 09:55:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.035 09:55:42 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:44.036 09:55:42 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:44.036 09:55:42 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.036 09:55:42 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.036 09:55:42 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.036 09:55:42 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.036 09:55:42 -- target/referrals.sh@26 -- # sort 00:08:44.295 09:55:42 -- target/referrals.sh@26 -- # echo 00:08:44.295 09:55:42 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:44.295 09:55:42 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:44.295 09:55:42 -- target/referrals.sh@86 -- # nvmftestfini 00:08:44.295 09:55:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:44.295 09:55:42 -- nvmf/common.sh@116 -- # sync 00:08:44.295 09:55:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:44.295 09:55:42 -- nvmf/common.sh@119 -- # set +e 00:08:44.295 09:55:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:44.295 09:55:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:44.295 rmmod nvme_tcp 00:08:44.295 rmmod nvme_fabrics 00:08:44.295 rmmod nvme_keyring 00:08:44.295 09:55:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:44.295 09:55:42 -- nvmf/common.sh@123 -- # set -e 00:08:44.295 09:55:42 -- nvmf/common.sh@124 -- # return 0 00:08:44.295 09:55:42 -- nvmf/common.sh@477 -- # '[' -n 73686 ']' 00:08:44.295 09:55:42 -- nvmf/common.sh@478 -- # killprocess 73686 00:08:44.295 09:55:42 -- common/autotest_common.sh@936 -- # '[' -z 73686 ']' 00:08:44.295 09:55:42 -- common/autotest_common.sh@940 -- # kill -0 73686 00:08:44.295 09:55:42 -- common/autotest_common.sh@941 -- # uname 00:08:44.295 09:55:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:44.295 09:55:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73686 00:08:44.295 09:55:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:44.295 09:55:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:44.295 09:55:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73686' 00:08:44.295 killing process with pid 73686 00:08:44.295 09:55:42 -- common/autotest_common.sh@955 -- # kill 73686 00:08:44.295 09:55:42 -- common/autotest_common.sh@960 -- # wait 73686 00:08:44.554 09:55:43 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:44.554 09:55:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:44.554 09:55:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:44.554 09:55:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.554 09:55:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:44.554 09:55:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.554 09:55:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.554 09:55:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.554 09:55:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:44.554 00:08:44.554 real 0m3.744s 00:08:44.554 user 0m12.586s 00:08:44.554 sys 0m0.941s 00:08:44.554 09:55:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.554 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:44.554 ************************************ 00:08:44.554 END TEST nvmf_referrals 00:08:44.554 ************************************ 00:08:44.554 09:55:43 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.554 09:55:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:44.554 09:55:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.554 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:44.554 ************************************ 00:08:44.554 START TEST nvmf_connect_disconnect 00:08:44.554 ************************************ 00:08:44.554 09:55:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:44.813 * Looking for test storage... 00:08:44.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.814 09:55:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:44.814 09:55:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:44.814 09:55:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:44.814 09:55:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:44.814 09:55:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:44.814 09:55:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:44.814 09:55:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:44.814 09:55:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:44.814 09:55:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:44.814 09:55:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.814 09:55:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:44.814 09:55:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:44.814 09:55:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:44.814 09:55:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:44.814 09:55:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:44.814 09:55:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:44.814 09:55:43 -- scripts/common.sh@344 -- # : 1 00:08:44.814 09:55:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:44.814 09:55:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.814 09:55:43 -- scripts/common.sh@364 -- # decimal 1 00:08:44.814 09:55:43 -- scripts/common.sh@352 -- # local d=1 00:08:44.814 09:55:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.814 09:55:43 -- scripts/common.sh@354 -- # echo 1 00:08:44.814 09:55:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:44.814 09:55:43 -- scripts/common.sh@365 -- # decimal 2 00:08:44.814 09:55:43 -- scripts/common.sh@352 -- # local d=2 00:08:44.814 09:55:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.814 09:55:43 -- scripts/common.sh@354 -- # echo 2 00:08:44.814 09:55:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:44.814 09:55:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:44.814 09:55:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:44.814 09:55:43 -- scripts/common.sh@367 -- # return 0 00:08:44.814 09:55:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.814 09:55:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.814 --rc genhtml_branch_coverage=1 00:08:44.814 --rc genhtml_function_coverage=1 00:08:44.814 --rc genhtml_legend=1 00:08:44.814 --rc geninfo_all_blocks=1 00:08:44.814 --rc geninfo_unexecuted_blocks=1 00:08:44.814 00:08:44.814 ' 00:08:44.814 09:55:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.814 --rc genhtml_branch_coverage=1 00:08:44.814 --rc genhtml_function_coverage=1 00:08:44.814 --rc genhtml_legend=1 00:08:44.814 --rc geninfo_all_blocks=1 00:08:44.814 --rc geninfo_unexecuted_blocks=1 00:08:44.814 00:08:44.814 ' 00:08:44.814 09:55:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.814 --rc genhtml_branch_coverage=1 00:08:44.814 --rc genhtml_function_coverage=1 00:08:44.814 --rc genhtml_legend=1 00:08:44.814 --rc geninfo_all_blocks=1 00:08:44.814 --rc geninfo_unexecuted_blocks=1 00:08:44.814 00:08:44.814 ' 00:08:44.814 09:55:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.814 --rc genhtml_branch_coverage=1 00:08:44.814 --rc genhtml_function_coverage=1 00:08:44.814 --rc genhtml_legend=1 00:08:44.814 --rc geninfo_all_blocks=1 00:08:44.814 --rc geninfo_unexecuted_blocks=1 00:08:44.814 00:08:44.814 ' 00:08:44.814 09:55:43 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.814 09:55:43 -- nvmf/common.sh@7 -- # uname -s 00:08:44.814 09:55:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.814 09:55:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.814 09:55:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.814 09:55:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.814 09:55:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.814 09:55:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.814 09:55:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.814 09:55:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.814 09:55:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.814 09:55:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.814 09:55:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
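The NVME_HOSTNQN generated here (with the NVME_HOSTID and NVME_HOST assignments continuing just below) is the host identity that every later nvme discover and nvme connect call in this log passes along. A rough sketch of the derivation; the exact string manipulation used by nvmf/common.sh is not shown in this excerpt, so the second line is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed: strip the prefix, keep the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")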
00:08:44.814 09:55:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:08:44.814 09:55:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.814 09:55:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.814 09:55:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.814 09:55:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.814 09:55:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.814 09:55:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.814 09:55:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.814 09:55:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.814 09:55:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.814 09:55:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.814 09:55:43 -- paths/export.sh@5 -- # export PATH 00:08:44.814 09:55:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.814 09:55:43 -- nvmf/common.sh@46 -- # : 0 00:08:44.814 09:55:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.814 09:55:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.814 09:55:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.814 09:55:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.814 09:55:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.814 09:55:43 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:44.814 09:55:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.814 09:55:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.814 09:55:43 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.814 09:55:43 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.814 09:55:43 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:44.814 09:55:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.814 09:55:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.814 09:55:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.814 09:55:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.815 09:55:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.815 09:55:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.815 09:55:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.815 09:55:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.815 09:55:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:44.815 09:55:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:44.815 09:55:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:44.815 09:55:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:44.815 09:55:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:44.815 09:55:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:44.815 09:55:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.815 09:55:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.815 09:55:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.815 09:55:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:44.815 09:55:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.815 09:55:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.815 09:55:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.815 09:55:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.815 09:55:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.815 09:55:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.815 09:55:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.815 09:55:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.815 09:55:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:44.815 09:55:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:44.815 Cannot find device "nvmf_tgt_br" 00:08:44.815 09:55:43 -- nvmf/common.sh@154 -- # true 00:08:44.815 09:55:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.815 Cannot find device "nvmf_tgt_br2" 00:08:44.815 09:55:43 -- nvmf/common.sh@155 -- # true 00:08:44.815 09:55:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:44.815 09:55:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:44.815 Cannot find device "nvmf_tgt_br" 00:08:44.815 09:55:43 -- nvmf/common.sh@157 -- # true 00:08:44.815 09:55:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:45.074 Cannot find device "nvmf_tgt_br2" 00:08:45.074 09:55:43 -- nvmf/common.sh@158 -- # true 00:08:45.074 09:55:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:45.074 09:55:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:45.074 09:55:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:45.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.074 09:55:43 -- nvmf/common.sh@161 -- # true 00:08:45.074 09:55:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.074 09:55:43 -- nvmf/common.sh@162 -- # true 00:08:45.074 09:55:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.074 09:55:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.074 09:55:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.074 09:55:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.074 09:55:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.074 09:55:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.074 09:55:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.074 09:55:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:45.074 09:55:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:45.074 09:55:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:45.074 09:55:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:45.074 09:55:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:45.074 09:55:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:45.074 09:55:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.074 09:55:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.074 09:55:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.074 09:55:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:45.074 09:55:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:45.074 09:55:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.074 09:55:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.074 09:55:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.074 09:55:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.074 09:55:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.074 09:55:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:45.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:45.074 00:08:45.074 --- 10.0.0.2 ping statistics --- 00:08:45.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.074 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:45.074 09:55:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:45.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:45.074 00:08:45.074 --- 10.0.0.3 ping statistics --- 00:08:45.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.074 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:45.074 09:55:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:45.074 00:08:45.074 --- 10.0.0.1 ping statistics --- 00:08:45.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.075 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:45.075 09:55:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.075 09:55:43 -- nvmf/common.sh@421 -- # return 0 00:08:45.075 09:55:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.075 09:55:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.075 09:55:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:45.075 09:55:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:45.075 09:55:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.075 09:55:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:45.075 09:55:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:45.334 09:55:43 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:45.334 09:55:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.334 09:55:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.334 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:45.334 09:55:43 -- nvmf/common.sh@469 -- # nvmfpid=73996 00:08:45.334 09:55:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.334 09:55:43 -- nvmf/common.sh@470 -- # waitforlisten 73996 00:08:45.334 09:55:43 -- common/autotest_common.sh@829 -- # '[' -z 73996 ']' 00:08:45.334 09:55:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.334 09:55:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.334 09:55:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.334 09:55:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.334 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:08:45.334 [2024-12-16 09:55:43.770334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.334 [2024-12-16 09:55:43.770465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.334 [2024-12-16 09:55:43.914575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.593 [2024-12-16 09:55:43.990695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.593 [2024-12-16 09:55:43.990837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.593 [2024-12-16 09:55:43.990850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.593 [2024-12-16 09:55:43.990858] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
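The nvmf_veth_init block that just finished builds the same three-link topology used by every TCP test in this run: one initiator-side veth kept on the host, two target-side veths moved into nvmf_tgt_ns_spdk, all enslaved to one bridge, with the pings above confirming reachability (the earlier "Cannot find device" / "Cannot open network namespace" lines come from the cleanup pass on a fresh runner and are expected). Reduced to its essentials, with commands and addresses taken from the trace; the link-up steps and the bridge FORWARD rule are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT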
00:08:45.593 [2024-12-16 09:55:43.991009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.593 [2024-12-16 09:55:43.992045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.593 [2024-12-16 09:55:43.992231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.593 [2024-12-16 09:55:43.992236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.531 09:55:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.531 09:55:44 -- common/autotest_common.sh@862 -- # return 0 00:08:46.531 09:55:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:46.531 09:55:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 09:55:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:46.531 09:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 [2024-12-16 09:55:44.889961] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.531 09:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:46.531 09:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 09:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.531 09:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 09:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.531 09:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 09:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.531 09:55:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.531 09:55:44 -- common/autotest_common.sh@10 -- # set +x 00:08:46.531 [2024-12-16 09:55:44.973287] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.531 09:55:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:46.531 09:55:44 -- target/connect_disconnect.sh@34 -- # set +x 00:08:49.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:57.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.292 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:47.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.597 09:59:28 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
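What produced the long run of disconnected-controller lines above: after the one-time target setup traced earlier, connect_disconnect.sh runs its 100 iterations with xtrace suppressed (set +x), so only nvme-cli's disconnect output reaches the log. Based on the variables it set (num_iterations=100, NVME_CONNECT='nvme connect -i 8'), the loop is roughly the following sketch; the script's exact source is not part of this log:

    # one-time setup, exactly as traced above
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                                        # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 100 connect/disconnect cycles; each disconnect prints the NQN line seen above
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # (the real script also waits for the block device to appear/disappear between steps)
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done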
00:12:30.597 09:59:28 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:30.597 09:59:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.597 09:59:28 -- nvmf/common.sh@116 -- # sync 00:12:30.597 09:59:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.597 09:59:28 -- nvmf/common.sh@119 -- # set +e 00:12:30.597 09:59:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.597 09:59:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.597 rmmod nvme_tcp 00:12:30.597 rmmod nvme_fabrics 00:12:30.597 rmmod nvme_keyring 00:12:30.597 09:59:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.597 09:59:28 -- nvmf/common.sh@123 -- # set -e 00:12:30.597 09:59:28 -- nvmf/common.sh@124 -- # return 0 00:12:30.597 09:59:28 -- nvmf/common.sh@477 -- # '[' -n 73996 ']' 00:12:30.597 09:59:28 -- nvmf/common.sh@478 -- # killprocess 73996 00:12:30.597 09:59:28 -- common/autotest_common.sh@936 -- # '[' -z 73996 ']' 00:12:30.597 09:59:28 -- common/autotest_common.sh@940 -- # kill -0 73996 00:12:30.597 09:59:28 -- common/autotest_common.sh@941 -- # uname 00:12:30.597 09:59:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.597 09:59:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73996 00:12:30.597 killing process with pid 73996 00:12:30.597 09:59:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.597 09:59:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.597 09:59:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73996' 00:12:30.597 09:59:28 -- common/autotest_common.sh@955 -- # kill 73996 00:12:30.597 09:59:28 -- common/autotest_common.sh@960 -- # wait 73996 00:12:30.597 09:59:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.597 09:59:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.597 09:59:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.597 09:59:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.597 09:59:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.597 09:59:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.597 09:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.597 09:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.597 09:59:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:30.597 00:12:30.597 real 3m46.044s 00:12:30.597 user 14m37.326s 00:12:30.597 sys 0m26.246s 00:12:30.597 09:59:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:30.597 ************************************ 00:12:30.597 END TEST nvmf_connect_disconnect 00:12:30.597 ************************************ 00:12:30.597 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.857 09:59:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.857 09:59:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:30.857 09:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.857 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.857 ************************************ 00:12:30.857 START TEST nvmf_multitarget 00:12:30.857 ************************************ 00:12:30.857 09:59:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.857 * Looking for test storage... 
00:12:30.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.857 09:59:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:30.857 09:59:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:30.857 09:59:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:30.857 09:59:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:30.857 09:59:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:30.857 09:59:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:30.857 09:59:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:30.857 09:59:29 -- scripts/common.sh@335 -- # IFS=.-: 00:12:30.857 09:59:29 -- scripts/common.sh@335 -- # read -ra ver1 00:12:30.857 09:59:29 -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.857 09:59:29 -- scripts/common.sh@336 -- # read -ra ver2 00:12:30.857 09:59:29 -- scripts/common.sh@337 -- # local 'op=<' 00:12:30.857 09:59:29 -- scripts/common.sh@339 -- # ver1_l=2 00:12:30.857 09:59:29 -- scripts/common.sh@340 -- # ver2_l=1 00:12:30.857 09:59:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:30.857 09:59:29 -- scripts/common.sh@343 -- # case "$op" in 00:12:30.857 09:59:29 -- scripts/common.sh@344 -- # : 1 00:12:30.857 09:59:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:30.857 09:59:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.857 09:59:29 -- scripts/common.sh@364 -- # decimal 1 00:12:30.857 09:59:29 -- scripts/common.sh@352 -- # local d=1 00:12:30.857 09:59:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.857 09:59:29 -- scripts/common.sh@354 -- # echo 1 00:12:30.857 09:59:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:30.857 09:59:29 -- scripts/common.sh@365 -- # decimal 2 00:12:30.857 09:59:29 -- scripts/common.sh@352 -- # local d=2 00:12:30.857 09:59:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.857 09:59:29 -- scripts/common.sh@354 -- # echo 2 00:12:30.857 09:59:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:30.857 09:59:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:30.857 09:59:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:30.857 09:59:29 -- scripts/common.sh@367 -- # return 0 00:12:30.857 09:59:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.857 09:59:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.857 --rc genhtml_branch_coverage=1 00:12:30.857 --rc genhtml_function_coverage=1 00:12:30.857 --rc genhtml_legend=1 00:12:30.857 --rc geninfo_all_blocks=1 00:12:30.857 --rc geninfo_unexecuted_blocks=1 00:12:30.857 00:12:30.857 ' 00:12:30.857 09:59:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.857 --rc genhtml_branch_coverage=1 00:12:30.857 --rc genhtml_function_coverage=1 00:12:30.857 --rc genhtml_legend=1 00:12:30.857 --rc geninfo_all_blocks=1 00:12:30.857 --rc geninfo_unexecuted_blocks=1 00:12:30.857 00:12:30.857 ' 00:12:30.857 09:59:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.857 --rc genhtml_branch_coverage=1 00:12:30.857 --rc genhtml_function_coverage=1 00:12:30.857 --rc genhtml_legend=1 00:12:30.857 --rc geninfo_all_blocks=1 00:12:30.857 --rc geninfo_unexecuted_blocks=1 00:12:30.857 00:12:30.857 ' 00:12:30.857 
09:59:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:30.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.857 --rc genhtml_branch_coverage=1 00:12:30.857 --rc genhtml_function_coverage=1 00:12:30.857 --rc genhtml_legend=1 00:12:30.857 --rc geninfo_all_blocks=1 00:12:30.857 --rc geninfo_unexecuted_blocks=1 00:12:30.857 00:12:30.857 ' 00:12:30.857 09:59:29 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.857 09:59:29 -- nvmf/common.sh@7 -- # uname -s 00:12:30.857 09:59:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.857 09:59:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.857 09:59:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.857 09:59:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.857 09:59:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.857 09:59:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.857 09:59:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.857 09:59:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.857 09:59:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.857 09:59:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.857 09:59:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:30.857 09:59:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:30.857 09:59:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.857 09:59:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.857 09:59:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.857 09:59:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.857 09:59:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.857 09:59:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.857 09:59:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.857 09:59:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.857 09:59:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.857 09:59:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.857 09:59:29 -- paths/export.sh@5 -- # export PATH 00:12:30.857 09:59:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.857 09:59:29 -- nvmf/common.sh@46 -- # : 0 00:12:30.857 09:59:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.857 09:59:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.857 09:59:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.857 09:59:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.857 09:59:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.857 09:59:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.857 09:59:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.857 09:59:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.857 09:59:29 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.857 09:59:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.857 09:59:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:30.857 09:59:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.857 09:59:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:30.857 09:59:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:30.857 09:59:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:30.857 09:59:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.857 09:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.857 09:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.116 09:59:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.116 09:59:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.116 09:59:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.116 09:59:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.116 09:59:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.116 09:59:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.116 09:59:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.116 09:59:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.116 09:59:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.116 09:59:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.117 09:59:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.117 09:59:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.117 09:59:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.117 09:59:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.117 09:59:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.117 09:59:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.117 09:59:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.117 09:59:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.117 09:59:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.117 09:59:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.117 Cannot find device "nvmf_tgt_br" 00:12:31.117 09:59:29 -- nvmf/common.sh@154 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.117 Cannot find device "nvmf_tgt_br2" 00:12:31.117 09:59:29 -- nvmf/common.sh@155 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.117 09:59:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.117 Cannot find device "nvmf_tgt_br" 00:12:31.117 09:59:29 -- nvmf/common.sh@157 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.117 Cannot find device "nvmf_tgt_br2" 00:12:31.117 09:59:29 -- nvmf/common.sh@158 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.117 09:59:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.117 09:59:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.117 09:59:29 -- nvmf/common.sh@161 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.117 09:59:29 -- nvmf/common.sh@162 -- # true 00:12:31.117 09:59:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.117 09:59:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.117 09:59:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.117 09:59:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.117 09:59:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.117 09:59:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.117 09:59:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.117 09:59:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.117 09:59:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.117 09:59:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.117 09:59:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:31.117 09:59:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.117 09:59:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:31.117 09:59:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.117 09:59:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.117 09:59:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:31.117 09:59:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.117 09:59:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.117 09:59:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.376 09:59:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.376 09:59:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.376 09:59:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.376 09:59:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.376 09:59:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:31.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:12:31.376 00:12:31.376 --- 10.0.0.2 ping statistics --- 00:12:31.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.376 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:31.376 09:59:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:31.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:31.376 00:12:31.376 --- 10.0.0.3 ping statistics --- 00:12:31.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.376 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:31.376 09:59:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:31.376 00:12:31.376 --- 10.0.0.1 ping statistics --- 00:12:31.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.376 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:31.376 09:59:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.376 09:59:29 -- nvmf/common.sh@421 -- # return 0 00:12:31.376 09:59:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:31.376 09:59:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.376 09:59:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:31.376 09:59:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:31.376 09:59:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.376 09:59:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:31.376 09:59:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:31.376 09:59:29 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:31.376 09:59:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:31.376 09:59:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.376 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:12:31.376 09:59:29 -- nvmf/common.sh@469 -- # nvmfpid=77792 00:12:31.376 09:59:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.376 09:59:29 -- nvmf/common.sh@470 -- # waitforlisten 77792 00:12:31.376 09:59:29 -- common/autotest_common.sh@829 -- # '[' -z 77792 ']' 00:12:31.376 09:59:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.376 09:59:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.376 09:59:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
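The sequence above is the nvmf_veth_init helper from test/nvmf/common.sh building the virtual test topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth endpoints, a bridge (nvmf_br) joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic on port 4420, each link verified with a single ping. A condensed standalone sketch of the same setup, using the addresses and interface names from the trace and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), would be roughly:

  # create the target namespace and the veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator side 10.0.0.1, target side (inside the namespace) 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers and let NVMe/TCP (port 4420) through
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check: the target address should answer from the host
  ping -c 1 10.0.0.2

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the helper first tries to tear down any leftover topology from a previous run before recreating it.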
00:12:31.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.376 09:59:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.376 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:12:31.376 [2024-12-16 09:59:29.872120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.376 [2024-12-16 09:59:29.872223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.635 [2024-12-16 09:59:30.011112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.635 [2024-12-16 09:59:30.082607] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.635 [2024-12-16 09:59:30.082820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.635 [2024-12-16 09:59:30.082834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.635 [2024-12-16 09:59:30.082843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.635 [2024-12-16 09:59:30.083059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.635 [2024-12-16 09:59:30.083162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.635 [2024-12-16 09:59:30.083408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.635 [2024-12-16 09:59:30.083408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.571 09:59:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.571 09:59:30 -- common/autotest_common.sh@862 -- # return 0 00:12:32.571 09:59:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.571 09:59:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.571 09:59:30 -- common/autotest_common.sh@10 -- # set +x 00:12:32.571 09:59:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.571 09:59:30 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.571 09:59:30 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.571 09:59:30 -- target/multitarget.sh@21 -- # jq length 00:12:32.571 09:59:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:32.571 09:59:31 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.830 "nvmf_tgt_1" 00:12:32.830 09:59:31 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:32.830 "nvmf_tgt_2" 00:12:32.830 09:59:31 -- target/multitarget.sh@28 -- # jq length 00:12:32.830 09:59:31 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.089 09:59:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:33.089 09:59:31 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:33.089 true 00:12:33.089 09:59:31 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:12:33.347 true 00:12:33.348 09:59:31 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.348 09:59:31 -- target/multitarget.sh@35 -- # jq length 00:12:33.348 09:59:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:33.348 09:59:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:33.348 09:59:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:33.348 09:59:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.348 09:59:31 -- nvmf/common.sh@116 -- # sync 00:12:33.606 09:59:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.606 09:59:31 -- nvmf/common.sh@119 -- # set +e 00:12:33.606 09:59:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.606 09:59:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.606 rmmod nvme_tcp 00:12:33.606 rmmod nvme_fabrics 00:12:33.606 rmmod nvme_keyring 00:12:33.606 09:59:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:33.606 09:59:32 -- nvmf/common.sh@123 -- # set -e 00:12:33.606 09:59:32 -- nvmf/common.sh@124 -- # return 0 00:12:33.606 09:59:32 -- nvmf/common.sh@477 -- # '[' -n 77792 ']' 00:12:33.607 09:59:32 -- nvmf/common.sh@478 -- # killprocess 77792 00:12:33.607 09:59:32 -- common/autotest_common.sh@936 -- # '[' -z 77792 ']' 00:12:33.607 09:59:32 -- common/autotest_common.sh@940 -- # kill -0 77792 00:12:33.607 09:59:32 -- common/autotest_common.sh@941 -- # uname 00:12:33.607 09:59:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.607 09:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77792 00:12:33.607 09:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.607 09:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.607 killing process with pid 77792 00:12:33.607 09:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77792' 00:12:33.607 09:59:32 -- common/autotest_common.sh@955 -- # kill 77792 00:12:33.607 09:59:32 -- common/autotest_common.sh@960 -- # wait 77792 00:12:33.865 09:59:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:33.865 09:59:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:33.865 09:59:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:33.865 09:59:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:33.865 09:59:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:33.865 09:59:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.865 09:59:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.865 09:59:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.865 09:59:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:33.865 00:12:33.865 real 0m3.018s 00:12:33.865 user 0m10.043s 00:12:33.865 sys 0m0.689s 00:12:33.865 09:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:33.865 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:12:33.865 ************************************ 00:12:33.865 END TEST nvmf_multitarget 00:12:33.865 ************************************ 00:12:33.865 09:59:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.866 09:59:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:33.866 09:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.866 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:12:33.866 
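That is the whole multitarget test body: starting from the default target, it adds nvmf_tgt_1 and nvmf_tgt_2, checks that nvmf_get_targets now reports three targets, deletes both again, and confirms the count is back to one before nvmftestfini unloads the nvme-tcp/nvme-fabrics modules and kills the app. A standalone sketch of that RPC sequence, with the helper path taken from the trace and the -s 32 option passed through as-is:

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

  # only the default target exists at the start
  test "$($rpc nvmf_get_targets | jq length)" -eq 1

  # add two named targets
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  test "$($rpc nvmf_get_targets | jq length)" -eq 3

  # delete them and confirm only the default target remains
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  test "$($rpc nvmf_get_targets | jq length)" -eq 1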
************************************ 00:12:33.866 START TEST nvmf_rpc 00:12:33.866 ************************************ 00:12:33.866 09:59:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.866 * Looking for test storage... 00:12:33.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.866 09:59:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:33.866 09:59:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:33.866 09:59:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:33.866 09:59:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:33.866 09:59:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:33.866 09:59:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:33.866 09:59:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:33.866 09:59:32 -- scripts/common.sh@335 -- # IFS=.-: 00:12:33.866 09:59:32 -- scripts/common.sh@335 -- # read -ra ver1 00:12:33.866 09:59:32 -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.866 09:59:32 -- scripts/common.sh@336 -- # read -ra ver2 00:12:33.866 09:59:32 -- scripts/common.sh@337 -- # local 'op=<' 00:12:33.866 09:59:32 -- scripts/common.sh@339 -- # ver1_l=2 00:12:33.866 09:59:32 -- scripts/common.sh@340 -- # ver2_l=1 00:12:33.866 09:59:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:33.866 09:59:32 -- scripts/common.sh@343 -- # case "$op" in 00:12:33.866 09:59:32 -- scripts/common.sh@344 -- # : 1 00:12:33.866 09:59:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:33.866 09:59:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.866 09:59:32 -- scripts/common.sh@364 -- # decimal 1 00:12:33.866 09:59:32 -- scripts/common.sh@352 -- # local d=1 00:12:33.866 09:59:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.866 09:59:32 -- scripts/common.sh@354 -- # echo 1 00:12:34.125 09:59:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.125 09:59:32 -- scripts/common.sh@365 -- # decimal 2 00:12:34.125 09:59:32 -- scripts/common.sh@352 -- # local d=2 00:12:34.125 09:59:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.125 09:59:32 -- scripts/common.sh@354 -- # echo 2 00:12:34.125 09:59:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.125 09:59:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.125 09:59:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.125 09:59:32 -- scripts/common.sh@367 -- # return 0 00:12:34.125 09:59:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.125 09:59:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.125 --rc genhtml_branch_coverage=1 00:12:34.125 --rc genhtml_function_coverage=1 00:12:34.125 --rc genhtml_legend=1 00:12:34.125 --rc geninfo_all_blocks=1 00:12:34.125 --rc geninfo_unexecuted_blocks=1 00:12:34.125 00:12:34.125 ' 00:12:34.125 09:59:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.125 --rc genhtml_branch_coverage=1 00:12:34.125 --rc genhtml_function_coverage=1 00:12:34.125 --rc genhtml_legend=1 00:12:34.125 --rc geninfo_all_blocks=1 00:12:34.125 --rc geninfo_unexecuted_blocks=1 00:12:34.125 00:12:34.125 ' 00:12:34.125 09:59:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.125 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.125 --rc genhtml_branch_coverage=1 00:12:34.125 --rc genhtml_function_coverage=1 00:12:34.125 --rc genhtml_legend=1 00:12:34.125 --rc geninfo_all_blocks=1 00:12:34.125 --rc geninfo_unexecuted_blocks=1 00:12:34.125 00:12:34.125 ' 00:12:34.125 09:59:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.125 --rc genhtml_branch_coverage=1 00:12:34.125 --rc genhtml_function_coverage=1 00:12:34.125 --rc genhtml_legend=1 00:12:34.125 --rc geninfo_all_blocks=1 00:12:34.125 --rc geninfo_unexecuted_blocks=1 00:12:34.125 00:12:34.125 ' 00:12:34.125 09:59:32 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.125 09:59:32 -- nvmf/common.sh@7 -- # uname -s 00:12:34.125 09:59:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.125 09:59:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.125 09:59:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.125 09:59:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.125 09:59:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.125 09:59:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.125 09:59:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.125 09:59:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.125 09:59:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.125 09:59:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.125 09:59:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:34.125 09:59:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:34.125 09:59:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.125 09:59:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.125 09:59:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.125 09:59:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.125 09:59:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.125 09:59:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.125 09:59:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.125 09:59:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.125 09:59:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.125 09:59:32 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.125 09:59:32 -- paths/export.sh@5 -- # export PATH 00:12:34.125 09:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.125 09:59:32 -- nvmf/common.sh@46 -- # : 0 00:12:34.125 09:59:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.125 09:59:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.125 09:59:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.125 09:59:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.125 09:59:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.125 09:59:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.125 09:59:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.125 09:59:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.125 09:59:32 -- target/rpc.sh@11 -- # loops=5 00:12:34.125 09:59:32 -- target/rpc.sh@23 -- # nvmftestinit 00:12:34.125 09:59:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.126 09:59:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.126 09:59:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.126 09:59:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.126 09:59:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.126 09:59:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.126 09:59:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.126 09:59:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.126 09:59:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:34.126 09:59:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:34.126 09:59:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:34.126 09:59:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:34.126 09:59:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:34.126 09:59:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:34.126 09:59:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.126 09:59:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.126 09:59:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.126 09:59:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:34.126 09:59:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.126 09:59:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.126 09:59:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.126 09:59:32 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.126 09:59:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.126 09:59:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.126 09:59:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.126 09:59:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.126 09:59:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:34.126 09:59:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:34.126 Cannot find device "nvmf_tgt_br" 00:12:34.126 09:59:32 -- nvmf/common.sh@154 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.126 Cannot find device "nvmf_tgt_br2" 00:12:34.126 09:59:32 -- nvmf/common.sh@155 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:34.126 09:59:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:34.126 Cannot find device "nvmf_tgt_br" 00:12:34.126 09:59:32 -- nvmf/common.sh@157 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:34.126 Cannot find device "nvmf_tgt_br2" 00:12:34.126 09:59:32 -- nvmf/common.sh@158 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:34.126 09:59:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:34.126 09:59:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.126 09:59:32 -- nvmf/common.sh@161 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.126 09:59:32 -- nvmf/common.sh@162 -- # true 00:12:34.126 09:59:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.126 09:59:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.126 09:59:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.126 09:59:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.126 09:59:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.126 09:59:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.385 09:59:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.385 09:59:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.385 09:59:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.385 09:59:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:34.385 09:59:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:34.385 09:59:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:34.385 09:59:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:34.385 09:59:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.385 09:59:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:34.385 09:59:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:34.385 09:59:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:12:34.385 09:59:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:34.385 09:59:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.385 09:59:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.385 09:59:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.385 09:59:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.385 09:59:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.385 09:59:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:34.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:12:34.385 00:12:34.385 --- 10.0.0.2 ping statistics --- 00:12:34.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.385 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:34.385 09:59:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:34.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:34.385 00:12:34.385 --- 10.0.0.3 ping statistics --- 00:12:34.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.385 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:34.385 09:59:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:34.385 00:12:34.385 --- 10.0.0.1 ping statistics --- 00:12:34.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.385 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:34.385 09:59:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.385 09:59:32 -- nvmf/common.sh@421 -- # return 0 00:12:34.385 09:59:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:34.385 09:59:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.385 09:59:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:34.385 09:59:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:34.385 09:59:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.385 09:59:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:34.385 09:59:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:34.385 09:59:32 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.385 09:59:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:34.385 09:59:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.385 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:12:34.385 09:59:32 -- nvmf/common.sh@469 -- # nvmfpid=78031 00:12:34.385 09:59:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.385 09:59:32 -- nvmf/common.sh@470 -- # waitforlisten 78031 00:12:34.385 09:59:32 -- common/autotest_common.sh@829 -- # '[' -z 78031 ']' 00:12:34.385 09:59:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.385 09:59:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.385 09:59:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
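nvmfappstart here repeats the pattern from the multitarget run: nvmf_veth_init rebuilds the topology, then nvmf_tgt is launched inside the target namespace on a four-core mask and the harness blocks until the app's RPC socket answers. Outside the harness this can be approximated as below; the polling loop merely stands in for the waitforlisten helper, and rpc_get_methods is used only as a cheap RPC to probe the socket with:

  # start the target application inside the namespace (cores 0-3, all trace groups)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # wait for the UNIX-domain RPC socket to come up
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done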
00:12:34.385 09:59:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.385 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:12:34.385 [2024-12-16 09:59:32.951244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:34.385 [2024-12-16 09:59:32.951389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.644 [2024-12-16 09:59:33.090809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.644 [2024-12-16 09:59:33.153997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:34.644 [2024-12-16 09:59:33.154147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.644 [2024-12-16 09:59:33.154159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.644 [2024-12-16 09:59:33.154166] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.645 [2024-12-16 09:59:33.154305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.645 [2024-12-16 09:59:33.154496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.645 [2024-12-16 09:59:33.155155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.645 [2024-12-16 09:59:33.155193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.581 09:59:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.581 09:59:33 -- common/autotest_common.sh@862 -- # return 0 00:12:35.581 09:59:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:35.581 09:59:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.581 09:59:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.581 09:59:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.581 09:59:33 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:35.581 09:59:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.581 09:59:33 -- common/autotest_common.sh@10 -- # set +x 00:12:35.581 09:59:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.581 09:59:33 -- target/rpc.sh@26 -- # stats='{ 00:12:35.581 "poll_groups": [ 00:12:35.581 { 00:12:35.581 "admin_qpairs": 0, 00:12:35.581 "completed_nvme_io": 0, 00:12:35.581 "current_admin_qpairs": 0, 00:12:35.581 "current_io_qpairs": 0, 00:12:35.581 "io_qpairs": 0, 00:12:35.581 "name": "nvmf_tgt_poll_group_0", 00:12:35.581 "pending_bdev_io": 0, 00:12:35.581 "transports": [] 00:12:35.581 }, 00:12:35.581 { 00:12:35.581 "admin_qpairs": 0, 00:12:35.581 "completed_nvme_io": 0, 00:12:35.581 "current_admin_qpairs": 0, 00:12:35.581 "current_io_qpairs": 0, 00:12:35.581 "io_qpairs": 0, 00:12:35.581 "name": "nvmf_tgt_poll_group_1", 00:12:35.581 "pending_bdev_io": 0, 00:12:35.581 "transports": [] 00:12:35.581 }, 00:12:35.581 { 00:12:35.581 "admin_qpairs": 0, 00:12:35.581 "completed_nvme_io": 0, 00:12:35.581 "current_admin_qpairs": 0, 00:12:35.581 "current_io_qpairs": 0, 00:12:35.581 "io_qpairs": 0, 00:12:35.581 "name": "nvmf_tgt_poll_group_2", 00:12:35.581 "pending_bdev_io": 0, 00:12:35.581 "transports": [] 00:12:35.581 }, 00:12:35.581 { 00:12:35.581 "admin_qpairs": 0, 00:12:35.581 "completed_nvme_io": 0, 00:12:35.581 "current_admin_qpairs": 0, 
00:12:35.581 "current_io_qpairs": 0, 00:12:35.581 "io_qpairs": 0, 00:12:35.581 "name": "nvmf_tgt_poll_group_3", 00:12:35.581 "pending_bdev_io": 0, 00:12:35.581 "transports": [] 00:12:35.581 } 00:12:35.581 ], 00:12:35.581 "tick_rate": 2200000000 00:12:35.581 }' 00:12:35.581 09:59:33 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:35.581 09:59:33 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:35.581 09:59:33 -- target/rpc.sh@15 -- # wc -l 00:12:35.581 09:59:33 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:35.581 09:59:34 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:35.581 09:59:34 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:35.581 09:59:34 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:35.581 09:59:34 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.581 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.581 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.581 [2024-12-16 09:59:34.070130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.581 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.581 09:59:34 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:35.581 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.581 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.581 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.581 09:59:34 -- target/rpc.sh@33 -- # stats='{ 00:12:35.581 "poll_groups": [ 00:12:35.581 { 00:12:35.581 "admin_qpairs": 0, 00:12:35.581 "completed_nvme_io": 0, 00:12:35.581 "current_admin_qpairs": 0, 00:12:35.581 "current_io_qpairs": 0, 00:12:35.581 "io_qpairs": 0, 00:12:35.581 "name": "nvmf_tgt_poll_group_0", 00:12:35.581 "pending_bdev_io": 0, 00:12:35.581 "transports": [ 00:12:35.581 { 00:12:35.581 "trtype": "TCP" 00:12:35.581 } 00:12:35.581 ] 00:12:35.582 }, 00:12:35.582 { 00:12:35.582 "admin_qpairs": 0, 00:12:35.582 "completed_nvme_io": 0, 00:12:35.582 "current_admin_qpairs": 0, 00:12:35.582 "current_io_qpairs": 0, 00:12:35.582 "io_qpairs": 0, 00:12:35.582 "name": "nvmf_tgt_poll_group_1", 00:12:35.582 "pending_bdev_io": 0, 00:12:35.582 "transports": [ 00:12:35.582 { 00:12:35.582 "trtype": "TCP" 00:12:35.582 } 00:12:35.582 ] 00:12:35.582 }, 00:12:35.582 { 00:12:35.582 "admin_qpairs": 0, 00:12:35.582 "completed_nvme_io": 0, 00:12:35.582 "current_admin_qpairs": 0, 00:12:35.582 "current_io_qpairs": 0, 00:12:35.582 "io_qpairs": 0, 00:12:35.582 "name": "nvmf_tgt_poll_group_2", 00:12:35.582 "pending_bdev_io": 0, 00:12:35.582 "transports": [ 00:12:35.582 { 00:12:35.582 "trtype": "TCP" 00:12:35.582 } 00:12:35.582 ] 00:12:35.582 }, 00:12:35.582 { 00:12:35.582 "admin_qpairs": 0, 00:12:35.582 "completed_nvme_io": 0, 00:12:35.582 "current_admin_qpairs": 0, 00:12:35.582 "current_io_qpairs": 0, 00:12:35.582 "io_qpairs": 0, 00:12:35.582 "name": "nvmf_tgt_poll_group_3", 00:12:35.582 "pending_bdev_io": 0, 00:12:35.582 "transports": [ 00:12:35.582 { 00:12:35.582 "trtype": "TCP" 00:12:35.582 } 00:12:35.582 ] 00:12:35.582 } 00:12:35.582 ], 00:12:35.582 "tick_rate": 2200000000 00:12:35.582 }' 00:12:35.582 09:59:34 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.582 09:59:34 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
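The rpc.sh checks around this point all follow one pattern: dump nvmf_get_stats, then use jq to count or sum fields and compare against the expected value. Before the transport exists the four poll groups carry no transports; after nvmf_create_transport -t tcp -o -u 8192 each poll group lists a TCP transport, and the admin and I/O qpair sums are still zero because nothing has connected yet. A rough sketch of those checks, with rpc_cmd written out as a plain rpc.py call (the harness wraps it differently):

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  # one poll group per core in the -m 0xF mask
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l                               # 4

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192

  # each poll group now carries a TCP transport, with no qpairs yet
  rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[0].trtype'                   # TCP
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}' # 0
  rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}' # 0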
00:12:35.582 09:59:34 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:35.582 09:59:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.840 09:59:34 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:35.840 09:59:34 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:35.840 09:59:34 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:35.840 09:59:34 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:35.840 09:59:34 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:35.840 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.840 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.840 Malloc1 00:12:35.840 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.840 09:59:34 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.840 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.840 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.840 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.841 09:59:34 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.841 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.841 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.841 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.841 09:59:34 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:35.841 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.841 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.841 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.841 09:59:34 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.841 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.841 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.841 [2024-12-16 09:59:34.283592] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.841 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.841 09:59:34 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -a 10.0.0.2 -s 4420 00:12:35.841 09:59:34 -- common/autotest_common.sh@650 -- # local es=0 00:12:35.841 09:59:34 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -a 10.0.0.2 -s 4420 00:12:35.841 09:59:34 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:35.841 09:59:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.841 09:59:34 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:35.841 09:59:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.841 09:59:34 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:35.841 09:59:34 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.841 09:59:34 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:35.841 09:59:34 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.841 09:59:34 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -a 10.0.0.2 -s 4420 00:12:35.841 [2024-12-16 09:59:34.311831] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed' 00:12:35.841 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.841 could not add new controller: failed to write to nvme-fabrics device 00:12:35.841 09:59:34 -- common/autotest_common.sh@653 -- # es=1 00:12:35.841 09:59:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.841 09:59:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.841 09:59:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.841 09:59:34 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:35.841 09:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.841 09:59:34 -- common/autotest_common.sh@10 -- # set +x 00:12:35.841 09:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.841 09:59:34 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.099 09:59:34 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.099 09:59:34 -- common/autotest_common.sh@1187 -- # local i=0 00:12:36.099 09:59:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.099 09:59:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:36.099 09:59:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:38.002 09:59:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:38.002 09:59:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:38.002 09:59:36 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.002 09:59:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:38.002 09:59:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.002 09:59:36 -- common/autotest_common.sh@1197 -- # return 0 00:12:38.002 09:59:36 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.002 09:59:36 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.002 09:59:36 -- common/autotest_common.sh@1208 -- # local i=0 00:12:38.002 09:59:36 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:38.002 09:59:36 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.002 09:59:36 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:38.002 09:59:36 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.003 09:59:36 -- common/autotest_common.sh@1220 -- # return 0 00:12:38.003 09:59:36 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:38.003 09:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.003 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:12:38.003 09:59:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.003 09:59:36 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.003 09:59:36 -- common/autotest_common.sh@650 -- # local es=0 00:12:38.003 09:59:36 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.003 09:59:36 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:38.003 09:59:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.003 09:59:36 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:38.003 09:59:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.003 09:59:36 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:38.003 09:59:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.003 09:59:36 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:38.003 09:59:36 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.003 09:59:36 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.003 [2024-12-16 09:59:36.612781] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed' 00:12:38.003 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.003 could not add new controller: failed to write to nvme-fabrics device 00:12:38.003 09:59:36 -- common/autotest_common.sh@653 -- # es=1 00:12:38.003 09:59:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.003 09:59:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.003 09:59:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.003 09:59:36 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:38.003 09:59:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.003 09:59:36 -- common/autotest_common.sh@10 -- # set +x 00:12:38.262 09:59:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.262 09:59:36 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.262 09:59:36 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.262 09:59:36 -- common/autotest_common.sh@1187 -- # local i=0 00:12:38.262 09:59:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.262 09:59:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:38.262 09:59:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:40.794 09:59:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:40.794 
09:59:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:40.794 09:59:38 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.794 09:59:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:40.794 09:59:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.794 09:59:38 -- common/autotest_common.sh@1197 -- # return 0 00:12:40.794 09:59:38 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.794 09:59:38 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.794 09:59:38 -- common/autotest_common.sh@1208 -- # local i=0 00:12:40.794 09:59:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:40.794 09:59:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.794 09:59:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:40.794 09:59:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.794 09:59:38 -- common/autotest_common.sh@1220 -- # return 0 00:12:40.794 09:59:38 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.794 09:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.794 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:12:40.794 09:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.794 09:59:38 -- target/rpc.sh@81 -- # seq 1 5 00:12:40.794 09:59:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.794 09:59:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.794 09:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.794 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:12:40.794 09:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.794 09:59:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.794 09:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.794 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:12:40.794 [2024-12-16 09:59:38.917224] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.794 09:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.794 09:59:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.794 09:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.794 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:12:40.794 09:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.794 09:59:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.794 09:59:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.794 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:12:40.794 09:59:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.794 09:59:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.794 09:59:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.794 09:59:39 -- common/autotest_common.sh@1187 -- # local i=0 00:12:40.794 09:59:39 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:12:40.794 09:59:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:40.794 09:59:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:42.696 09:59:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:42.696 09:59:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:42.696 09:59:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.696 09:59:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:42.696 09:59:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.696 09:59:41 -- common/autotest_common.sh@1197 -- # return 0 00:12:42.696 09:59:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.696 09:59:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.696 09:59:41 -- common/autotest_common.sh@1208 -- # local i=0 00:12:42.696 09:59:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:42.696 09:59:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.696 09:59:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:42.696 09:59:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.696 09:59:41 -- common/autotest_common.sh@1220 -- # return 0 00:12:42.696 09:59:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.696 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.696 09:59:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.696 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.696 09:59:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.696 09:59:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.696 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.696 09:59:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.696 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 [2024-12-16 09:59:41.215881] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.696 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.696 09:59:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.696 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.696 09:59:41 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.696 09:59:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.697 09:59:41 -- common/autotest_common.sh@10 
-- # set +x 00:12:42.697 09:59:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.697 09:59:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.956 09:59:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.956 09:59:41 -- common/autotest_common.sh@1187 -- # local i=0 00:12:42.956 09:59:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.956 09:59:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:42.956 09:59:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:44.857 09:59:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:44.857 09:59:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:44.857 09:59:43 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.857 09:59:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:44.857 09:59:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.857 09:59:43 -- common/autotest_common.sh@1197 -- # return 0 00:12:44.857 09:59:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.857 09:59:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.857 09:59:43 -- common/autotest_common.sh@1208 -- # local i=0 00:12:44.857 09:59:43 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:44.857 09:59:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.116 09:59:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:45.116 09:59:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:45.116 09:59:43 -- common/autotest_common.sh@1220 -- # return 0 00:12:45.116 09:59:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:45.116 09:59:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 [2024-12-16 09:59:43.523080] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.116 09:59:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 09:59:43 -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 09:59:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 09:59:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.116 09:59:43 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.116 09:59:43 -- common/autotest_common.sh@1187 -- # local i=0 00:12:45.116 09:59:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.116 09:59:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:45.116 09:59:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:47.649 09:59:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:47.649 09:59:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:47.649 09:59:45 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.649 09:59:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:47.649 09:59:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.649 09:59:45 -- common/autotest_common.sh@1197 -- # return 0 00:12:47.649 09:59:45 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.649 09:59:45 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.649 09:59:45 -- common/autotest_common.sh@1208 -- # local i=0 00:12:47.649 09:59:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:47.649 09:59:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.649 09:59:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.649 09:59:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:47.649 09:59:45 -- common/autotest_common.sh@1220 -- # return 0 00:12:47.649 09:59:45 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 09:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 09:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.649 09:59:45 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 09:59:45 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 [2024-12-16 09:59:45.934393] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.649 09:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 09:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.649 09:59:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.649 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:12:47.649 09:59:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.649 09:59:45 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.649 09:59:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.649 09:59:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:47.649 09:59:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.649 09:59:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:47.649 09:59:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:49.553 09:59:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:49.553 09:59:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:49.553 09:59:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.553 09:59:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:49.553 09:59:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.553 09:59:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:49.553 09:59:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.812 09:59:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.812 09:59:48 -- common/autotest_common.sh@1208 -- # local i=0 00:12:49.812 09:59:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:49.812 09:59:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.812 09:59:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:49.812 09:59:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.812 09:59:48 -- common/autotest_common.sh@1220 -- # return 0 00:12:49.812 09:59:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.812 09:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.812 09:59:48 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.812 09:59:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.812 09:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.812 09:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 [2024-12-16 09:59:48.241319] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.812 09:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.812 09:59:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.812 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:12:49.812 09:59:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.812 09:59:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.812 09:59:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.812 09:59:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:49.812 09:59:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.812 09:59:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:49.812 09:59:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.346 09:59:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.346 09:59:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.346 09:59:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.346 09:59:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.346 09:59:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.346 09:59:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.346 09:59:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@1208 -- # local i=0 00:12:52.346 09:59:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:52.346 09:59:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:52.346 09:59:50 -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@1220 -- # return 0 00:12:52.346 09:59:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@99 -- # seq 1 5 00:12:52.346 09:59:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.346 09:59:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 [2024-12-16 09:59:50.555994] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.346 09:59:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.346 
09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 [2024-12-16 09:59:50.608002] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.346 09:59:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 [2024-12-16 09:59:50.664064] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.346 09:59:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 [2024-12-16 09:59:50.712107] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.346 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.346 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.346 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.346 09:59:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.347 09:59:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 [2024-12-16 09:59:50.760159] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.347 09:59:50 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.347 09:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.347 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:12:52.347 09:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.347 09:59:50 -- target/rpc.sh@110 -- # stats='{ 00:12:52.347 "poll_groups": [ 00:12:52.347 { 00:12:52.347 "admin_qpairs": 2, 00:12:52.347 "completed_nvme_io": 66, 00:12:52.347 "current_admin_qpairs": 0, 00:12:52.347 "current_io_qpairs": 0, 00:12:52.347 "io_qpairs": 16, 00:12:52.347 "name": "nvmf_tgt_poll_group_0", 00:12:52.347 "pending_bdev_io": 0, 00:12:52.347 "transports": [ 00:12:52.347 { 00:12:52.347 "trtype": "TCP" 00:12:52.347 } 00:12:52.347 ] 00:12:52.347 }, 00:12:52.347 { 00:12:52.347 "admin_qpairs": 3, 00:12:52.347 "completed_nvme_io": 117, 00:12:52.347 "current_admin_qpairs": 0, 00:12:52.347 "current_io_qpairs": 0, 00:12:52.347 "io_qpairs": 17, 00:12:52.347 "name": "nvmf_tgt_poll_group_1", 00:12:52.347 "pending_bdev_io": 0, 00:12:52.347 "transports": [ 00:12:52.347 { 00:12:52.347 "trtype": "TCP" 00:12:52.347 } 00:12:52.347 ] 00:12:52.347 }, 00:12:52.347 { 00:12:52.347 "admin_qpairs": 1, 00:12:52.347 "completed_nvme_io": 168, 00:12:52.347 "current_admin_qpairs": 0, 00:12:52.347 "current_io_qpairs": 0, 00:12:52.347 "io_qpairs": 19, 00:12:52.347 "name": "nvmf_tgt_poll_group_2", 00:12:52.347 "pending_bdev_io": 0, 00:12:52.347 "transports": [ 00:12:52.347 { 00:12:52.347 "trtype": "TCP" 00:12:52.347 } 00:12:52.347 ] 00:12:52.347 }, 00:12:52.347 { 00:12:52.347 "admin_qpairs": 1, 00:12:52.347 "completed_nvme_io": 69, 00:12:52.347 "current_admin_qpairs": 0, 00:12:52.347 "current_io_qpairs": 0, 00:12:52.347 "io_qpairs": 18, 00:12:52.347 "name": "nvmf_tgt_poll_group_3", 00:12:52.347 "pending_bdev_io": 0, 00:12:52.347 "transports": [ 00:12:52.347 { 00:12:52.347 "trtype": "TCP" 00:12:52.347 } 00:12:52.347 ] 00:12:52.347 } 00:12:52.347 ], 00:12:52.347 "tick_rate": 2200000000 00:12:52.347 }' 00:12:52.347 09:59:50 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.347 09:59:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
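Editor's note (not part of the captured run): the jsum checks traced just above sum a per-poll-group counter out of nvmf_get_stats with jq and awk. A minimal standalone sketch of that aggregation, assuming SPDK's scripts/rpc.py is reachable and the target uses the default RPC socket (the jsum_stats name is hypothetical), might look like this:

    #!/usr/bin/env bash
    # Sum per-poll-group counters from nvmf_get_stats, mirroring the jsum
    # helper seen in the trace. Requires jq and awk on PATH.
    set -euo pipefail

    rpc=${RPC:-./scripts/rpc.py}

    jsum_stats() {
        local filter=$1
        # jq emits one number per poll group; awk adds them up.
        "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    echo "admin qpairs: $(jsum_stats '.poll_groups[].admin_qpairs')"
    echo "io qpairs:    $(jsum_stats '.poll_groups[].io_qpairs')"

The test only asserts that both sums are greater than zero, which is what the (( 7 > 0 )) and (( 70 > 0 )) checks above correspond to.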
00:12:52.347 09:59:50 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.347 09:59:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.347 09:59:50 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:52.347 09:59:50 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.347 09:59:50 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.347 09:59:50 -- target/rpc.sh@123 -- # nvmftestfini 00:12:52.347 09:59:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:52.347 09:59:50 -- nvmf/common.sh@116 -- # sync 00:12:52.606 09:59:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:52.606 09:59:50 -- nvmf/common.sh@119 -- # set +e 00:12:52.606 09:59:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:52.606 09:59:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:52.606 rmmod nvme_tcp 00:12:52.606 rmmod nvme_fabrics 00:12:52.606 rmmod nvme_keyring 00:12:52.606 09:59:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:52.606 09:59:51 -- nvmf/common.sh@123 -- # set -e 00:12:52.606 09:59:51 -- nvmf/common.sh@124 -- # return 0 00:12:52.606 09:59:51 -- nvmf/common.sh@477 -- # '[' -n 78031 ']' 00:12:52.606 09:59:51 -- nvmf/common.sh@478 -- # killprocess 78031 00:12:52.606 09:59:51 -- common/autotest_common.sh@936 -- # '[' -z 78031 ']' 00:12:52.606 09:59:51 -- common/autotest_common.sh@940 -- # kill -0 78031 00:12:52.606 09:59:51 -- common/autotest_common.sh@941 -- # uname 00:12:52.606 09:59:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.606 09:59:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78031 00:12:52.606 killing process with pid 78031 00:12:52.606 09:59:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.606 09:59:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.606 09:59:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78031' 00:12:52.606 09:59:51 -- common/autotest_common.sh@955 -- # kill 78031 00:12:52.606 09:59:51 -- common/autotest_common.sh@960 -- # wait 78031 00:12:52.870 09:59:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.870 09:59:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.870 09:59:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.870 09:59:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.870 09:59:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.870 09:59:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.870 09:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.870 09:59:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.870 09:59:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:52.870 00:12:52.870 real 0m18.971s 00:12:52.870 user 1m11.355s 00:12:52.870 sys 0m2.563s 00:12:52.870 ************************************ 00:12:52.870 END TEST nvmf_rpc 00:12:52.870 ************************************ 00:12:52.870 09:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.870 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:12:52.870 09:59:51 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.870 09:59:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.870 09:59:51 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.870 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:12:52.870 ************************************ 00:12:52.870 START TEST nvmf_invalid 00:12:52.870 ************************************ 00:12:52.870 09:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.870 * Looking for test storage... 00:12:52.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.870 09:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:52.870 09:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:52.870 09:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.870 09:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.870 09:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.870 09:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.870 09:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.870 09:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.870 09:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.870 09:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.870 09:59:51 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.870 09:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.870 09:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.870 09:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.870 09:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.870 09:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.870 09:59:51 -- scripts/common.sh@344 -- # : 1 00:12:52.870 09:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.870 09:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.870 09:59:51 -- scripts/common.sh@364 -- # decimal 1 00:12:53.154 09:59:51 -- scripts/common.sh@352 -- # local d=1 00:12:53.154 09:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.154 09:59:51 -- scripts/common.sh@354 -- # echo 1 00:12:53.154 09:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:53.154 09:59:51 -- scripts/common.sh@365 -- # decimal 2 00:12:53.154 09:59:51 -- scripts/common.sh@352 -- # local d=2 00:12:53.154 09:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.154 09:59:51 -- scripts/common.sh@354 -- # echo 2 00:12:53.154 09:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:53.154 09:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:53.154 09:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:53.154 09:59:51 -- scripts/common.sh@367 -- # return 0 00:12:53.154 09:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.154 09:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.154 --rc genhtml_branch_coverage=1 00:12:53.154 --rc genhtml_function_coverage=1 00:12:53.154 --rc genhtml_legend=1 00:12:53.154 --rc geninfo_all_blocks=1 00:12:53.154 --rc geninfo_unexecuted_blocks=1 00:12:53.154 00:12:53.154 ' 00:12:53.154 09:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.154 --rc genhtml_branch_coverage=1 00:12:53.154 --rc genhtml_function_coverage=1 00:12:53.154 --rc genhtml_legend=1 00:12:53.154 --rc geninfo_all_blocks=1 00:12:53.154 --rc geninfo_unexecuted_blocks=1 00:12:53.154 00:12:53.154 ' 00:12:53.154 09:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.154 --rc genhtml_branch_coverage=1 00:12:53.154 --rc genhtml_function_coverage=1 00:12:53.154 --rc genhtml_legend=1 00:12:53.154 --rc geninfo_all_blocks=1 00:12:53.154 --rc geninfo_unexecuted_blocks=1 00:12:53.154 00:12:53.154 ' 00:12:53.154 09:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:53.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.154 --rc genhtml_branch_coverage=1 00:12:53.154 --rc genhtml_function_coverage=1 00:12:53.154 --rc genhtml_legend=1 00:12:53.154 --rc geninfo_all_blocks=1 00:12:53.154 --rc geninfo_unexecuted_blocks=1 00:12:53.154 00:12:53.154 ' 00:12:53.154 09:59:51 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:53.154 09:59:51 -- nvmf/common.sh@7 -- # uname -s 00:12:53.154 09:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.154 09:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.154 09:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.154 09:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.154 09:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.154 09:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.154 09:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.154 09:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.154 09:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.154 09:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.154 09:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:53.154 
09:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:53.154 09:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.154 09:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.154 09:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.154 09:59:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.154 09:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.154 09:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.154 09:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.154 09:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.154 09:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.154 09:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.154 09:59:51 -- paths/export.sh@5 -- # export PATH 00:12:53.154 09:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.154 09:59:51 -- nvmf/common.sh@46 -- # : 0 00:12:53.154 09:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:53.154 09:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:53.154 09:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:53.154 09:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.154 09:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.154 09:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
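Editor's note (not part of the captured run): nvme gen-hostnqn above supplies the host NQN and host ID that the rpc.sh loop traced earlier passes to nvme connect. A hedged, condensed reconstruction of one iteration of that create/connect/disconnect cycle (commands taken from the trace; not a verbatim copy of the test script) could read:

    #!/usr/bin/env bash
    # One iteration of the subsystem create/connect/disconnect cycle from
    # target/rpc.sh, reconstructed for illustration only.
    set -euo pipefail

    rpc=${RPC:-./scripts/rpc.py}
    nqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*uuid:}            # the trace reuses the UUID as hostid

    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"

    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    sleep 2                              # wait for the namespace to surface as a block device
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

    nvme disconnect -n "$nqn"
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
    "$rpc" nvmf_delete_subsystem "$nqn"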
00:12:53.154 09:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:53.154 09:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:53.154 09:59:51 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:53.154 09:59:51 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:53.154 09:59:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:53.154 09:59:51 -- target/invalid.sh@14 -- # target=foobar 00:12:53.154 09:59:51 -- target/invalid.sh@16 -- # RANDOM=0 00:12:53.154 09:59:51 -- target/invalid.sh@34 -- # nvmftestinit 00:12:53.154 09:59:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:53.154 09:59:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.154 09:59:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:53.154 09:59:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:53.154 09:59:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:53.154 09:59:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.154 09:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.155 09:59:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.155 09:59:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:53.155 09:59:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:53.155 09:59:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:53.155 09:59:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:53.155 09:59:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:53.155 09:59:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:53.155 09:59:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.155 09:59:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.155 09:59:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:53.155 09:59:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:53.155 09:59:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.155 09:59:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.155 09:59:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.155 09:59:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.155 09:59:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.155 09:59:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.155 09:59:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.155 09:59:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.155 09:59:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:53.155 09:59:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:53.155 Cannot find device "nvmf_tgt_br" 00:12:53.155 09:59:51 -- nvmf/common.sh@154 -- # true 00:12:53.155 09:59:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.155 Cannot find device "nvmf_tgt_br2" 00:12:53.155 09:59:51 -- nvmf/common.sh@155 -- # true 00:12:53.155 09:59:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:53.155 09:59:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:53.155 Cannot find device "nvmf_tgt_br" 00:12:53.155 09:59:51 -- nvmf/common.sh@157 -- # true 00:12:53.155 09:59:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:53.155 Cannot find device "nvmf_tgt_br2" 00:12:53.155 09:59:51 -- nvmf/common.sh@158 -- # true 00:12:53.155 09:59:51 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:53.155 09:59:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:53.155 09:59:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.155 09:59:51 -- nvmf/common.sh@161 -- # true 00:12:53.155 09:59:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.155 09:59:51 -- nvmf/common.sh@162 -- # true 00:12:53.155 09:59:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.155 09:59:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.155 09:59:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.155 09:59:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.155 09:59:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:53.155 09:59:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.155 09:59:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.155 09:59:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:53.155 09:59:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:53.155 09:59:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:53.155 09:59:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:53.155 09:59:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:53.155 09:59:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:53.155 09:59:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.428 09:59:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.428 09:59:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.428 09:59:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:53.428 09:59:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:53.428 09:59:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.428 09:59:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.428 09:59:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.428 09:59:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.428 09:59:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.428 09:59:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:53.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:53.428 00:12:53.428 --- 10.0.0.2 ping statistics --- 00:12:53.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.428 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:53.428 09:59:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:53.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:53.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:12:53.428 00:12:53.428 --- 10.0.0.3 ping statistics --- 00:12:53.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.428 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:53.428 09:59:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:53.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:53.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:53.428 00:12:53.428 --- 10.0.0.1 ping statistics --- 00:12:53.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.428 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:53.428 09:59:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.428 09:59:51 -- nvmf/common.sh@421 -- # return 0 00:12:53.428 09:59:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:53.428 09:59:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.428 09:59:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:53.428 09:59:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:53.428 09:59:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.428 09:59:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:53.428 09:59:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:53.428 09:59:51 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:53.428 09:59:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.428 09:59:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.428 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.428 09:59:51 -- nvmf/common.sh@469 -- # nvmfpid=78551 00:12:53.428 09:59:51 -- nvmf/common.sh@470 -- # waitforlisten 78551 00:12:53.428 09:59:51 -- common/autotest_common.sh@829 -- # '[' -z 78551 ']' 00:12:53.428 09:59:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.428 09:59:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.428 09:59:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.428 09:59:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.428 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:12:53.428 09:59:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.428 [2024-12-16 09:59:51.932281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:53.428 [2024-12-16 09:59:51.932596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.687 [2024-12-16 09:59:52.063268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.687 [2024-12-16 09:59:52.116727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.687 [2024-12-16 09:59:52.117153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.687 [2024-12-16 09:59:52.117202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:53.687 [2024-12-16 09:59:52.117433] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.687 [2024-12-16 09:59:52.117607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.687 [2024-12-16 09:59:52.117715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.687 [2024-12-16 09:59:52.117849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.687 [2024-12-16 09:59:52.117854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.623 09:59:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.623 09:59:52 -- common/autotest_common.sh@862 -- # return 0 00:12:54.623 09:59:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:54.623 09:59:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.623 09:59:52 -- common/autotest_common.sh@10 -- # set +x 00:12:54.623 09:59:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.623 09:59:52 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:54.623 09:59:52 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29961 00:12:54.882 [2024-12-16 09:59:53.256407] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.882 09:59:53 -- target/invalid.sh@40 -- # out='2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29961 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:54.882 request: 00:12:54.882 { 00:12:54.882 "method": "nvmf_create_subsystem", 00:12:54.882 "params": { 00:12:54.882 "nqn": "nqn.2016-06.io.spdk:cnode29961", 00:12:54.882 "tgt_name": "foobar" 00:12:54.882 } 00:12:54.882 } 00:12:54.882 Got JSON-RPC error response 00:12:54.882 GoRPCClient: error on JSON-RPC call' 00:12:54.882 09:59:53 -- target/invalid.sh@41 -- # [[ 2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29961 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:54.882 request: 00:12:54.882 { 00:12:54.882 "method": "nvmf_create_subsystem", 00:12:54.882 "params": { 00:12:54.882 "nqn": "nqn.2016-06.io.spdk:cnode29961", 00:12:54.882 "tgt_name": "foobar" 00:12:54.882 } 00:12:54.882 } 00:12:54.882 Got JSON-RPC error response 00:12:54.882 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.882 09:59:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.882 09:59:53 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18135 00:12:55.141 [2024-12-16 09:59:53.572650] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18135: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:55.141 09:59:53 -- target/invalid.sh@45 -- # out='2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18135 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:55.141 request: 00:12:55.141 { 00:12:55.141 
"method": "nvmf_create_subsystem", 00:12:55.141 "params": { 00:12:55.141 "nqn": "nqn.2016-06.io.spdk:cnode18135", 00:12:55.141 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:55.141 } 00:12:55.141 } 00:12:55.141 Got JSON-RPC error response 00:12:55.141 GoRPCClient: error on JSON-RPC call' 00:12:55.141 09:59:53 -- target/invalid.sh@46 -- # [[ 2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18135 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:55.141 request: 00:12:55.141 { 00:12:55.141 "method": "nvmf_create_subsystem", 00:12:55.141 "params": { 00:12:55.141 "nqn": "nqn.2016-06.io.spdk:cnode18135", 00:12:55.141 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:55.141 } 00:12:55.141 } 00:12:55.141 Got JSON-RPC error response 00:12:55.141 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.141 09:59:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:55.141 09:59:53 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22217 00:12:55.399 [2024-12-16 09:59:53.872896] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22217: invalid model number 'SPDK_Controller' 00:12:55.399 09:59:53 -- target/invalid.sh@50 -- # out='2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode22217], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:55.399 request: 00:12:55.399 { 00:12:55.399 "method": "nvmf_create_subsystem", 00:12:55.399 "params": { 00:12:55.400 "nqn": "nqn.2016-06.io.spdk:cnode22217", 00:12:55.400 "model_number": "SPDK_Controller\u001f" 00:12:55.400 } 00:12:55.400 } 00:12:55.400 Got JSON-RPC error response 00:12:55.400 GoRPCClient: error on JSON-RPC call' 00:12:55.400 09:59:53 -- target/invalid.sh@51 -- # [[ 2024/12/16 09:59:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode22217], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:55.400 request: 00:12:55.400 { 00:12:55.400 "method": "nvmf_create_subsystem", 00:12:55.400 "params": { 00:12:55.400 "nqn": "nqn.2016-06.io.spdk:cnode22217", 00:12:55.400 "model_number": "SPDK_Controller\u001f" 00:12:55.400 } 00:12:55.400 } 00:12:55.400 Got JSON-RPC error response 00:12:55.400 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.400 09:59:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:55.400 09:59:53 -- target/invalid.sh@19 -- # local length=21 ll 00:12:55.400 09:59:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.400 09:59:53 -- target/invalid.sh@21 -- # local chars 00:12:55.400 09:59:53 -- target/invalid.sh@22 -- # local 
string 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 98 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=b 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 71 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=G 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 118 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=v 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 89 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=Y 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 51 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=3 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 38 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+='&' 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 100 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=d 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 126 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+='~' 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 44 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=, 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 77 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=M 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 95 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # 
string+=_ 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 93 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=']' 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 34 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+='"' 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 57 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=9 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 104 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=h 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 122 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=z 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 120 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=x 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 94 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+='^' 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 95 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=_ 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # printf %x 106 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.400 09:59:53 -- target/invalid.sh@25 -- # string+=j 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:54 -- target/invalid.sh@25 -- # printf %x 99 00:12:55.400 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:55.400 09:59:54 -- target/invalid.sh@25 -- # string+=c 00:12:55.400 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.400 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.400 09:59:54 -- target/invalid.sh@28 -- # [[ b == \- ]] 00:12:55.400 09:59:54 -- target/invalid.sh@31 -- # echo 'bGvY3&d~,M_]"9hzx^_jc' 00:12:55.400 09:59:54 -- 
target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'bGvY3&d~,M_]"9hzx^_jc' nqn.2016-06.io.spdk:cnode337 00:12:55.659 [2024-12-16 09:59:54.277220] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode337: invalid serial number 'bGvY3&d~,M_]"9hzx^_jc' 00:12:55.918 09:59:54 -- target/invalid.sh@54 -- # out='2024/12/16 09:59:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode337 serial_number:bGvY3&d~,M_]"9hzx^_jc], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN bGvY3&d~,M_]"9hzx^_jc 00:12:55.918 request: 00:12:55.918 { 00:12:55.918 "method": "nvmf_create_subsystem", 00:12:55.918 "params": { 00:12:55.918 "nqn": "nqn.2016-06.io.spdk:cnode337", 00:12:55.918 "serial_number": "bGvY3&d~,M_]\"9hzx^_jc" 00:12:55.918 } 00:12:55.918 } 00:12:55.918 Got JSON-RPC error response 00:12:55.918 GoRPCClient: error on JSON-RPC call' 00:12:55.918 09:59:54 -- target/invalid.sh@55 -- # [[ 2024/12/16 09:59:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode337 serial_number:bGvY3&d~,M_]"9hzx^_jc], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN bGvY3&d~,M_]"9hzx^_jc 00:12:55.918 request: 00:12:55.918 { 00:12:55.918 "method": "nvmf_create_subsystem", 00:12:55.918 "params": { 00:12:55.918 "nqn": "nqn.2016-06.io.spdk:cnode337", 00:12:55.918 "serial_number": "bGvY3&d~,M_]\"9hzx^_jc" 00:12:55.918 } 00:12:55.918 } 00:12:55.918 Got JSON-RPC error response 00:12:55.918 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:55.918 09:59:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:55.918 09:59:54 -- target/invalid.sh@19 -- # local length=41 ll 00:12:55.918 09:59:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:55.918 09:59:54 -- target/invalid.sh@21 -- # local chars 00:12:55.918 09:59:54 -- target/invalid.sh@22 -- # local string 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 91 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+='[' 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 35 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+='#' 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 94 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+='^' 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 
09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 123 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+='{' 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 113 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=q 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 106 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=j 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 107 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=k 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 116 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=t 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 67 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=C 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 97 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=a 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # printf %x 37 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:55.918 09:59:54 -- target/invalid.sh@25 -- # string+=% 00:12:55.918 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 83 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=S 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 79 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=O 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 71 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=G 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 
09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 44 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=, 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 93 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=']' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 126 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+='~' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 34 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+='"' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 40 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+='(' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 76 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=L 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 72 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=H 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 107 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=k 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 85 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=U 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 77 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=M 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 88 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=X 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 
09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 106 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=j 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 77 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=M 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 81 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=Q 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 59 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=';' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 40 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+='(' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 79 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=O 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 77 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=M 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 76 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=L 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 44 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=, 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 41 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=')' 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 87 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=W 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 
09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 61 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+== 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 97 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=a 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 37 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=% 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 86 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=V 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # printf %x 118 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:55.919 09:59:54 -- target/invalid.sh@25 -- # string+=v 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.919 09:59:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.919 09:59:54 -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:12:55.919 09:59:54 -- target/invalid.sh@31 -- # echo '[#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv' 00:12:55.919 09:59:54 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '[#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv' nqn.2016-06.io.spdk:cnode23200 00:12:56.178 [2024-12-16 09:59:54.789691] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23200: invalid model number '[#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv' 00:12:56.437 09:59:54 -- target/invalid.sh@58 -- # out='2024/12/16 09:59:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:[#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv nqn:nqn.2016-06.io.spdk:cnode23200], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv 00:12:56.437 request: 00:12:56.437 { 00:12:56.437 "method": "nvmf_create_subsystem", 00:12:56.437 "params": { 00:12:56.437 "nqn": "nqn.2016-06.io.spdk:cnode23200", 00:12:56.437 "model_number": "[#^{qjktCa%SOG,]~\"(LHkUMXjMQ;(OML,)W=a%Vv" 00:12:56.437 } 00:12:56.437 } 00:12:56.437 Got JSON-RPC error response 00:12:56.437 GoRPCClient: error on JSON-RPC call' 00:12:56.437 09:59:54 -- target/invalid.sh@59 -- # [[ 2024/12/16 09:59:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:[#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv nqn:nqn.2016-06.io.spdk:cnode23200], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN [#^{qjktCa%SOG,]~"(LHkUMXjMQ;(OML,)W=a%Vv 00:12:56.437 request: 00:12:56.437 { 00:12:56.437 "method": "nvmf_create_subsystem", 00:12:56.437 "params": { 00:12:56.437 "nqn": "nqn.2016-06.io.spdk:cnode23200", 00:12:56.437 "model_number": "[#^{qjktCa%SOG,]~\"(LHkUMXjMQ;(OML,)W=a%Vv" 
00:12:56.437 } 00:12:56.437 } 00:12:56.437 Got JSON-RPC error response 00:12:56.437 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.437 09:59:54 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:56.695 [2024-12-16 09:59:55.089994] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.695 09:59:55 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.953 09:59:55 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:56.953 09:59:55 -- target/invalid.sh@67 -- # echo '' 00:12:56.953 09:59:55 -- target/invalid.sh@67 -- # head -n 1 00:12:56.953 09:59:55 -- target/invalid.sh@67 -- # IP= 00:12:56.953 09:59:55 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:57.211 [2024-12-16 09:59:55.710088] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:57.211 09:59:55 -- target/invalid.sh@69 -- # out='2024/12/16 09:59:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:57.211 request: 00:12:57.211 { 00:12:57.211 "method": "nvmf_subsystem_remove_listener", 00:12:57.211 "params": { 00:12:57.211 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.211 "listen_address": { 00:12:57.211 "trtype": "tcp", 00:12:57.211 "traddr": "", 00:12:57.211 "trsvcid": "4421" 00:12:57.211 } 00:12:57.211 } 00:12:57.211 } 00:12:57.211 Got JSON-RPC error response 00:12:57.211 GoRPCClient: error on JSON-RPC call' 00:12:57.211 09:59:55 -- target/invalid.sh@70 -- # [[ 2024/12/16 09:59:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:57.211 request: 00:12:57.211 { 00:12:57.211 "method": "nvmf_subsystem_remove_listener", 00:12:57.211 "params": { 00:12:57.211 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.211 "listen_address": { 00:12:57.211 "trtype": "tcp", 00:12:57.211 "traddr": "", 00:12:57.211 "trsvcid": "4421" 00:12:57.211 } 00:12:57.211 } 00:12:57.211 } 00:12:57.211 Got JSON-RPC error response 00:12:57.211 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:57.211 09:59:55 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7539 -i 0 00:12:57.470 [2024-12-16 09:59:56.046325] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7539: invalid cntlid range [0-65519] 00:12:57.470 09:59:56 -- target/invalid.sh@73 -- # out='2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7539], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:57.470 request: 00:12:57.470 { 00:12:57.470 "method": "nvmf_create_subsystem", 00:12:57.470 "params": { 00:12:57.470 "nqn": "nqn.2016-06.io.spdk:cnode7539", 00:12:57.470 "min_cntlid": 0 00:12:57.470 } 00:12:57.470 } 00:12:57.470 Got JSON-RPC error response 
00:12:57.470 GoRPCClient: error on JSON-RPC call' 00:12:57.470 09:59:56 -- target/invalid.sh@74 -- # [[ 2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7539], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:57.470 request: 00:12:57.470 { 00:12:57.470 "method": "nvmf_create_subsystem", 00:12:57.470 "params": { 00:12:57.470 "nqn": "nqn.2016-06.io.spdk:cnode7539", 00:12:57.470 "min_cntlid": 0 00:12:57.470 } 00:12:57.470 } 00:12:57.470 Got JSON-RPC error response 00:12:57.470 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.470 09:59:56 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26143 -i 65520 00:12:57.729 [2024-12-16 09:59:56.278531] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26143: invalid cntlid range [65520-65519] 00:12:57.729 09:59:56 -- target/invalid.sh@75 -- # out='2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode26143], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:57.729 request: 00:12:57.729 { 00:12:57.729 "method": "nvmf_create_subsystem", 00:12:57.729 "params": { 00:12:57.729 "nqn": "nqn.2016-06.io.spdk:cnode26143", 00:12:57.729 "min_cntlid": 65520 00:12:57.729 } 00:12:57.729 } 00:12:57.729 Got JSON-RPC error response 00:12:57.729 GoRPCClient: error on JSON-RPC call' 00:12:57.729 09:59:56 -- target/invalid.sh@76 -- # [[ 2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode26143], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:57.729 request: 00:12:57.729 { 00:12:57.729 "method": "nvmf_create_subsystem", 00:12:57.729 "params": { 00:12:57.729 "nqn": "nqn.2016-06.io.spdk:cnode26143", 00:12:57.729 "min_cntlid": 65520 00:12:57.729 } 00:12:57.729 } 00:12:57.729 Got JSON-RPC error response 00:12:57.729 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.729 09:59:56 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31663 -I 0 00:12:57.988 [2024-12-16 09:59:56.554834] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31663: invalid cntlid range [1-0] 00:12:57.988 09:59:56 -- target/invalid.sh@77 -- # out='2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31663], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:57.988 request: 00:12:57.988 { 00:12:57.988 "method": "nvmf_create_subsystem", 00:12:57.988 "params": { 00:12:57.988 "nqn": "nqn.2016-06.io.spdk:cnode31663", 00:12:57.988 "max_cntlid": 0 00:12:57.988 } 00:12:57.988 } 00:12:57.988 Got JSON-RPC error response 00:12:57.988 GoRPCClient: error on JSON-RPC call' 00:12:57.988 09:59:56 -- target/invalid.sh@78 -- # [[ 2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31663], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 
00:12:57.988 request: 00:12:57.988 { 00:12:57.988 "method": "nvmf_create_subsystem", 00:12:57.988 "params": { 00:12:57.988 "nqn": "nqn.2016-06.io.spdk:cnode31663", 00:12:57.988 "max_cntlid": 0 00:12:57.988 } 00:12:57.988 } 00:12:57.988 Got JSON-RPC error response 00:12:57.988 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.988 09:59:56 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14464 -I 65520 00:12:58.247 [2024-12-16 09:59:56.783056] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14464: invalid cntlid range [1-65520] 00:12:58.247 09:59:56 -- target/invalid.sh@79 -- # out='2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14464], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:58.247 request: 00:12:58.247 { 00:12:58.247 "method": "nvmf_create_subsystem", 00:12:58.247 "params": { 00:12:58.247 "nqn": "nqn.2016-06.io.spdk:cnode14464", 00:12:58.247 "max_cntlid": 65520 00:12:58.247 } 00:12:58.247 } 00:12:58.247 Got JSON-RPC error response 00:12:58.247 GoRPCClient: error on JSON-RPC call' 00:12:58.247 09:59:56 -- target/invalid.sh@80 -- # [[ 2024/12/16 09:59:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14464], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:58.247 request: 00:12:58.247 { 00:12:58.247 "method": "nvmf_create_subsystem", 00:12:58.247 "params": { 00:12:58.247 "nqn": "nqn.2016-06.io.spdk:cnode14464", 00:12:58.247 "max_cntlid": 65520 00:12:58.247 } 00:12:58.247 } 00:12:58.247 Got JSON-RPC error response 00:12:58.247 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.247 09:59:56 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13311 -i 6 -I 5 00:12:58.506 [2024-12-16 09:59:57.071308] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13311: invalid cntlid range [6-5] 00:12:58.506 09:59:57 -- target/invalid.sh@83 -- # out='2024/12/16 09:59:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode13311], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:58.506 request: 00:12:58.506 { 00:12:58.506 "method": "nvmf_create_subsystem", 00:12:58.506 "params": { 00:12:58.506 "nqn": "nqn.2016-06.io.spdk:cnode13311", 00:12:58.506 "min_cntlid": 6, 00:12:58.506 "max_cntlid": 5 00:12:58.506 } 00:12:58.506 } 00:12:58.506 Got JSON-RPC error response 00:12:58.506 GoRPCClient: error on JSON-RPC call' 00:12:58.506 09:59:57 -- target/invalid.sh@84 -- # [[ 2024/12/16 09:59:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode13311], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:58.506 request: 00:12:58.506 { 00:12:58.506 "method": "nvmf_create_subsystem", 00:12:58.506 "params": { 00:12:58.506 "nqn": "nqn.2016-06.io.spdk:cnode13311", 00:12:58.506 "min_cntlid": 6, 00:12:58.506 "max_cntlid": 5 00:12:58.506 } 00:12:58.506 } 00:12:58.506 Got JSON-RPC error response 
00:12:58.506 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.506 09:59:57 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:58.765 09:59:57 -- target/invalid.sh@87 -- # out='request: 00:12:58.765 { 00:12:58.765 "name": "foobar", 00:12:58.765 "method": "nvmf_delete_target", 00:12:58.765 "req_id": 1 00:12:58.765 } 00:12:58.765 Got JSON-RPC error response 00:12:58.765 response: 00:12:58.765 { 00:12:58.765 "code": -32602, 00:12:58.765 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:58.765 }' 00:12:58.765 09:59:57 -- target/invalid.sh@88 -- # [[ request: 00:12:58.765 { 00:12:58.765 "name": "foobar", 00:12:58.765 "method": "nvmf_delete_target", 00:12:58.765 "req_id": 1 00:12:58.765 } 00:12:58.765 Got JSON-RPC error response 00:12:58.765 response: 00:12:58.765 { 00:12:58.765 "code": -32602, 00:12:58.765 "message": "The specified target doesn't exist, cannot delete it." 00:12:58.765 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:58.765 09:59:57 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:58.765 09:59:57 -- target/invalid.sh@91 -- # nvmftestfini 00:12:58.765 09:59:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.765 09:59:57 -- nvmf/common.sh@116 -- # sync 00:12:58.765 09:59:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.765 09:59:57 -- nvmf/common.sh@119 -- # set +e 00:12:58.765 09:59:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.765 09:59:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.765 rmmod nvme_tcp 00:12:58.765 rmmod nvme_fabrics 00:12:58.765 rmmod nvme_keyring 00:12:58.765 09:59:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.765 09:59:57 -- nvmf/common.sh@123 -- # set -e 00:12:58.765 09:59:57 -- nvmf/common.sh@124 -- # return 0 00:12:58.765 09:59:57 -- nvmf/common.sh@477 -- # '[' -n 78551 ']' 00:12:58.765 09:59:57 -- nvmf/common.sh@478 -- # killprocess 78551 00:12:58.765 09:59:57 -- common/autotest_common.sh@936 -- # '[' -z 78551 ']' 00:12:58.765 09:59:57 -- common/autotest_common.sh@940 -- # kill -0 78551 00:12:58.765 09:59:57 -- common/autotest_common.sh@941 -- # uname 00:12:58.765 09:59:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.765 09:59:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78551 00:12:58.765 killing process with pid 78551 00:12:58.765 09:59:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:58.765 09:59:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:58.765 09:59:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78551' 00:12:58.765 09:59:57 -- common/autotest_common.sh@955 -- # kill 78551 00:12:58.765 09:59:57 -- common/autotest_common.sh@960 -- # wait 78551 00:12:59.024 09:59:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.024 09:59:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.024 09:59:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.024 09:59:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.024 09:59:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.024 09:59:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.024 09:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.024 09:59:57 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:59.024 09:59:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:59.024 ************************************ 00:12:59.024 END TEST nvmf_invalid 00:12:59.024 ************************************ 00:12:59.024 00:12:59.024 real 0m6.240s 00:12:59.024 user 0m25.383s 00:12:59.024 sys 0m1.301s 00:12:59.024 09:59:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.024 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.024 09:59:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:59.024 09:59:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:59.024 09:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.024 09:59:57 -- common/autotest_common.sh@10 -- # set +x 00:12:59.024 ************************************ 00:12:59.024 START TEST nvmf_abort 00:12:59.024 ************************************ 00:12:59.024 09:59:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:59.284 * Looking for test storage... 00:12:59.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:59.284 09:59:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:59.284 09:59:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:59.284 09:59:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:59.284 09:59:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:59.284 09:59:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:59.284 09:59:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:59.284 09:59:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:59.284 09:59:57 -- scripts/common.sh@335 -- # IFS=.-: 00:12:59.284 09:59:57 -- scripts/common.sh@335 -- # read -ra ver1 00:12:59.284 09:59:57 -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.284 09:59:57 -- scripts/common.sh@336 -- # read -ra ver2 00:12:59.284 09:59:57 -- scripts/common.sh@337 -- # local 'op=<' 00:12:59.284 09:59:57 -- scripts/common.sh@339 -- # ver1_l=2 00:12:59.284 09:59:57 -- scripts/common.sh@340 -- # ver2_l=1 00:12:59.284 09:59:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:59.284 09:59:57 -- scripts/common.sh@343 -- # case "$op" in 00:12:59.284 09:59:57 -- scripts/common.sh@344 -- # : 1 00:12:59.284 09:59:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:59.284 09:59:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.284 09:59:57 -- scripts/common.sh@364 -- # decimal 1 00:12:59.284 09:59:57 -- scripts/common.sh@352 -- # local d=1 00:12:59.284 09:59:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.284 09:59:57 -- scripts/common.sh@354 -- # echo 1 00:12:59.284 09:59:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:59.284 09:59:57 -- scripts/common.sh@365 -- # decimal 2 00:12:59.284 09:59:57 -- scripts/common.sh@352 -- # local d=2 00:12:59.284 09:59:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.284 09:59:57 -- scripts/common.sh@354 -- # echo 2 00:12:59.284 09:59:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:59.284 09:59:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:59.284 09:59:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:59.284 09:59:57 -- scripts/common.sh@367 -- # return 0 00:12:59.284 09:59:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.284 09:59:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:59.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.284 --rc genhtml_branch_coverage=1 00:12:59.284 --rc genhtml_function_coverage=1 00:12:59.284 --rc genhtml_legend=1 00:12:59.284 --rc geninfo_all_blocks=1 00:12:59.284 --rc geninfo_unexecuted_blocks=1 00:12:59.284 00:12:59.284 ' 00:12:59.284 09:59:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:59.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.284 --rc genhtml_branch_coverage=1 00:12:59.284 --rc genhtml_function_coverage=1 00:12:59.284 --rc genhtml_legend=1 00:12:59.284 --rc geninfo_all_blocks=1 00:12:59.284 --rc geninfo_unexecuted_blocks=1 00:12:59.284 00:12:59.284 ' 00:12:59.284 09:59:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:59.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.284 --rc genhtml_branch_coverage=1 00:12:59.284 --rc genhtml_function_coverage=1 00:12:59.284 --rc genhtml_legend=1 00:12:59.284 --rc geninfo_all_blocks=1 00:12:59.284 --rc geninfo_unexecuted_blocks=1 00:12:59.284 00:12:59.284 ' 00:12:59.284 09:59:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:59.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.285 --rc genhtml_branch_coverage=1 00:12:59.285 --rc genhtml_function_coverage=1 00:12:59.285 --rc genhtml_legend=1 00:12:59.285 --rc geninfo_all_blocks=1 00:12:59.285 --rc geninfo_unexecuted_blocks=1 00:12:59.285 00:12:59.285 ' 00:12:59.285 09:59:57 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.285 09:59:57 -- nvmf/common.sh@7 -- # uname -s 00:12:59.285 09:59:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.285 09:59:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.285 09:59:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.285 09:59:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.285 09:59:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.285 09:59:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.285 09:59:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.285 09:59:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.285 09:59:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.285 09:59:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:59.285 
09:59:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:12:59.285 09:59:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.285 09:59:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.285 09:59:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.285 09:59:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.285 09:59:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.285 09:59:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.285 09:59:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.285 09:59:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.285 09:59:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.285 09:59:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.285 09:59:57 -- paths/export.sh@5 -- # export PATH 00:12:59.285 09:59:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.285 09:59:57 -- nvmf/common.sh@46 -- # : 0 00:12:59.285 09:59:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.285 09:59:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.285 09:59:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.285 09:59:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.285 09:59:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.285 09:59:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
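What the trace shows next is nvmf_veth_init (nvmf/common.sh lines ~140-206 below) building the virtual test network for the TCP transport: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends, a bridge nvmf_br joining the peer ends, 10.0.0.1 on the initiator side, 10.0.0.2/10.0.0.3 on the target side, and iptables rules admitting NVMe/TCP traffic on port 4420. A minimal stand-alone sketch of the same topology, assuming root privileges and that these interface names are free on the host, could look like:

    #!/usr/bin/env bash
    # Sketch of the veth/bridge layout used by nvmf_veth_init (run as root).
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get bridged together
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side interfaces into the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up, including loopback inside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the peer ends so the initiator can reach both target addresses
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP traffic on the default port and allow bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check: initiator -> first target address
    ping -c 1 10.0.0.2

The three ping checks at the end of the sketch correspond to the 10.0.0.2, 10.0.0.3 and 10.0.0.1 reachability tests recorded a few lines further down in the trace.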
00:12:59.285 09:59:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.285 09:59:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.285 09:59:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.285 09:59:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:59.285 09:59:57 -- target/abort.sh@14 -- # nvmftestinit 00:12:59.285 09:59:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.285 09:59:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.285 09:59:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.285 09:59:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.285 09:59:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.285 09:59:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.285 09:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.285 09:59:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.285 09:59:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:59.285 09:59:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:59.285 09:59:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.285 09:59:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.285 09:59:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.285 09:59:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:59.285 09:59:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.285 09:59:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.285 09:59:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.285 09:59:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.285 09:59:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.285 09:59:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.285 09:59:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.285 09:59:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.285 09:59:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:59.285 09:59:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.285 Cannot find device "nvmf_tgt_br" 00:12:59.285 09:59:57 -- nvmf/common.sh@154 -- # true 00:12:59.285 09:59:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.285 Cannot find device "nvmf_tgt_br2" 00:12:59.285 09:59:57 -- nvmf/common.sh@155 -- # true 00:12:59.285 09:59:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.285 09:59:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.285 Cannot find device "nvmf_tgt_br" 00:12:59.285 09:59:57 -- nvmf/common.sh@157 -- # true 00:12:59.285 09:59:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.285 Cannot find device "nvmf_tgt_br2" 00:12:59.285 09:59:57 -- nvmf/common.sh@158 -- # true 00:12:59.285 09:59:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.544 09:59:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.544 09:59:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.544 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:59.544 09:59:57 -- nvmf/common.sh@161 -- # true 00:12:59.544 09:59:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.544 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.544 09:59:57 -- nvmf/common.sh@162 -- # true 00:12:59.544 09:59:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.544 09:59:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.544 09:59:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.544 09:59:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.544 09:59:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.544 09:59:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.544 09:59:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.544 09:59:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.544 09:59:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.544 09:59:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:59.544 09:59:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:59.544 09:59:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:59.544 09:59:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:59.544 09:59:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.544 09:59:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.544 09:59:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.544 09:59:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:59.544 09:59:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:59.544 09:59:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.544 09:59:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.544 09:59:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.544 09:59:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.544 09:59:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.544 09:59:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:59.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:59.544 00:12:59.544 --- 10.0.0.2 ping statistics --- 00:12:59.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.544 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:59.544 09:59:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:59.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:12:59.544 00:12:59.544 --- 10.0.0.3 ping statistics --- 00:12:59.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.544 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:59.544 09:59:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:59.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:59.544 00:12:59.544 --- 10.0.0.1 ping statistics --- 00:12:59.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.544 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:59.544 09:59:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.545 09:59:58 -- nvmf/common.sh@421 -- # return 0 00:12:59.545 09:59:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.545 09:59:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.545 09:59:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.545 09:59:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.545 09:59:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.545 09:59:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.545 09:59:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.545 09:59:58 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:59.545 09:59:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:59.545 09:59:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.545 09:59:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.545 09:59:58 -- nvmf/common.sh@469 -- # nvmfpid=79070 00:12:59.545 09:59:58 -- nvmf/common.sh@470 -- # waitforlisten 79070 00:12:59.545 09:59:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:59.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.545 09:59:58 -- common/autotest_common.sh@829 -- # '[' -z 79070 ']' 00:12:59.545 09:59:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.545 09:59:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.545 09:59:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.545 09:59:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.545 09:59:58 -- common/autotest_common.sh@10 -- # set +x 00:12:59.803 [2024-12-16 09:59:58.205975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.803 [2024-12-16 09:59:58.206066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.803 [2024-12-16 09:59:58.347176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.803 [2024-12-16 09:59:58.404567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:59.803 [2024-12-16 09:59:58.404991] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.803 [2024-12-16 09:59:58.405043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.803 [2024-12-16 09:59:58.405271] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:59.803 [2024-12-16 09:59:58.405434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.803 [2024-12-16 09:59:58.405969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.803 [2024-12-16 09:59:58.406014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.740 09:59:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.740 09:59:59 -- common/autotest_common.sh@862 -- # return 0 00:13:00.740 09:59:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:00.740 09:59:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 09:59:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.740 09:59:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 [2024-12-16 09:59:59.195712] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 Malloc0 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 Delay0 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 [2024-12-16 09:59:59.263633] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:00.740 09:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.740 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:13:00.740 09:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.740 09:59:59 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:00.998 [2024-12-16 09:59:59.433442] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:02.899 Initializing NVMe Controllers 00:13:02.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:02.899 controller IO queue size 128 less than required 00:13:02.899 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:02.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:02.899 Initialization complete. Launching workers. 00:13:02.899 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35831 00:13:02.899 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35892, failed to submit 62 00:13:02.899 success 35831, unsuccess 61, failed 0 00:13:02.899 10:00:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:02.899 10:00:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.899 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:02.899 10:00:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.899 10:00:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:02.899 10:00:01 -- target/abort.sh@38 -- # nvmftestfini 00:13:02.899 10:00:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:02.899 10:00:01 -- nvmf/common.sh@116 -- # sync 00:13:03.157 10:00:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:03.157 10:00:01 -- nvmf/common.sh@119 -- # set +e 00:13:03.157 10:00:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:03.157 10:00:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:03.157 rmmod nvme_tcp 00:13:03.157 rmmod nvme_fabrics 00:13:03.157 rmmod nvme_keyring 00:13:03.157 10:00:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:03.157 10:00:01 -- nvmf/common.sh@123 -- # set -e 00:13:03.157 10:00:01 -- nvmf/common.sh@124 -- # return 0 00:13:03.157 10:00:01 -- nvmf/common.sh@477 -- # '[' -n 79070 ']' 00:13:03.157 10:00:01 -- nvmf/common.sh@478 -- # killprocess 79070 00:13:03.157 10:00:01 -- common/autotest_common.sh@936 -- # '[' -z 79070 ']' 00:13:03.157 10:00:01 -- common/autotest_common.sh@940 -- # kill -0 79070 00:13:03.157 10:00:01 -- common/autotest_common.sh@941 -- # uname 00:13:03.157 10:00:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.157 10:00:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79070 00:13:03.157 killing process with pid 79070 00:13:03.157 10:00:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:03.157 10:00:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:03.157 10:00:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79070' 00:13:03.157 10:00:01 -- common/autotest_common.sh@955 -- # kill 79070 00:13:03.157 10:00:01 -- common/autotest_common.sh@960 -- # wait 79070 00:13:03.416 10:00:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:03.416 10:00:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:03.416 10:00:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:03.416 10:00:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.416 10:00:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:03.416 10:00:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.416 
10:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.416 10:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.416 10:00:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:03.416 00:13:03.416 real 0m4.265s 00:13:03.416 user 0m12.293s 00:13:03.416 sys 0m0.998s 00:13:03.416 ************************************ 00:13:03.416 END TEST nvmf_abort 00:13:03.416 ************************************ 00:13:03.416 10:00:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:03.416 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:03.416 10:00:01 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:03.416 10:00:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:03.416 10:00:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.416 10:00:01 -- common/autotest_common.sh@10 -- # set +x 00:13:03.416 ************************************ 00:13:03.416 START TEST nvmf_ns_hotplug_stress 00:13:03.416 ************************************ 00:13:03.416 10:00:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:03.416 * Looking for test storage... 00:13:03.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:03.416 10:00:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:03.416 10:00:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:03.416 10:00:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:03.674 10:00:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:03.674 10:00:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:03.674 10:00:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:03.674 10:00:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:03.674 10:00:02 -- scripts/common.sh@335 -- # IFS=.-: 00:13:03.674 10:00:02 -- scripts/common.sh@335 -- # read -ra ver1 00:13:03.674 10:00:02 -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.674 10:00:02 -- scripts/common.sh@336 -- # read -ra ver2 00:13:03.674 10:00:02 -- scripts/common.sh@337 -- # local 'op=<' 00:13:03.674 10:00:02 -- scripts/common.sh@339 -- # ver1_l=2 00:13:03.674 10:00:02 -- scripts/common.sh@340 -- # ver2_l=1 00:13:03.674 10:00:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:03.674 10:00:02 -- scripts/common.sh@343 -- # case "$op" in 00:13:03.674 10:00:02 -- scripts/common.sh@344 -- # : 1 00:13:03.674 10:00:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:03.674 10:00:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.674 10:00:02 -- scripts/common.sh@364 -- # decimal 1 00:13:03.674 10:00:02 -- scripts/common.sh@352 -- # local d=1 00:13:03.674 10:00:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.674 10:00:02 -- scripts/common.sh@354 -- # echo 1 00:13:03.674 10:00:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:03.674 10:00:02 -- scripts/common.sh@365 -- # decimal 2 00:13:03.674 10:00:02 -- scripts/common.sh@352 -- # local d=2 00:13:03.674 10:00:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.674 10:00:02 -- scripts/common.sh@354 -- # echo 2 00:13:03.674 10:00:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:03.674 10:00:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:03.674 10:00:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:03.674 10:00:02 -- scripts/common.sh@367 -- # return 0 00:13:03.674 10:00:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.674 10:00:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.674 --rc genhtml_branch_coverage=1 00:13:03.674 --rc genhtml_function_coverage=1 00:13:03.674 --rc genhtml_legend=1 00:13:03.674 --rc geninfo_all_blocks=1 00:13:03.674 --rc geninfo_unexecuted_blocks=1 00:13:03.674 00:13:03.674 ' 00:13:03.674 10:00:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.674 --rc genhtml_branch_coverage=1 00:13:03.674 --rc genhtml_function_coverage=1 00:13:03.674 --rc genhtml_legend=1 00:13:03.674 --rc geninfo_all_blocks=1 00:13:03.674 --rc geninfo_unexecuted_blocks=1 00:13:03.674 00:13:03.674 ' 00:13:03.674 10:00:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.674 --rc genhtml_branch_coverage=1 00:13:03.674 --rc genhtml_function_coverage=1 00:13:03.674 --rc genhtml_legend=1 00:13:03.674 --rc geninfo_all_blocks=1 00:13:03.674 --rc geninfo_unexecuted_blocks=1 00:13:03.674 00:13:03.674 ' 00:13:03.674 10:00:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:03.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.674 --rc genhtml_branch_coverage=1 00:13:03.674 --rc genhtml_function_coverage=1 00:13:03.674 --rc genhtml_legend=1 00:13:03.674 --rc geninfo_all_blocks=1 00:13:03.674 --rc geninfo_unexecuted_blocks=1 00:13:03.674 00:13:03.674 ' 00:13:03.674 10:00:02 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:03.674 10:00:02 -- nvmf/common.sh@7 -- # uname -s 00:13:03.674 10:00:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.674 10:00:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.674 10:00:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.674 10:00:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.674 10:00:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.674 10:00:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.674 10:00:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.674 10:00:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.674 10:00:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.674 10:00:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
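The lt/cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15, pulled out of `lcov --version` with awk) predates version 2, so that the older coverage options get exported: both version strings are split on '.', '-' and ':' and compared field by field. A condensed sketch of that comparison in plain bash; the function name version_lt and the abridged LCOV_OPTS value are illustrative, not the exact helper from the harness:

    version_lt() {                     # succeeds when $1 sorts before $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'   # abridged
    fi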
00:13:03.674 10:00:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:13:03.674 10:00:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.674 10:00:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.674 10:00:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:03.674 10:00:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:03.674 10:00:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.674 10:00:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.674 10:00:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.674 10:00:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.674 10:00:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.674 10:00:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.674 10:00:02 -- paths/export.sh@5 -- # export PATH 00:13:03.674 10:00:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.674 10:00:02 -- nvmf/common.sh@46 -- # : 0 00:13:03.674 10:00:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:03.674 10:00:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:03.674 10:00:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:03.674 10:00:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.674 10:00:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.674 10:00:02 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:03.674 10:00:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:03.674 10:00:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:03.674 10:00:02 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:03.674 10:00:02 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:03.674 10:00:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:03.674 10:00:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.674 10:00:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:03.674 10:00:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:03.674 10:00:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:03.674 10:00:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.674 10:00:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.674 10:00:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.674 10:00:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:03.674 10:00:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:03.674 10:00:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.675 10:00:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.675 10:00:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:03.675 10:00:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:03.675 10:00:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:03.675 10:00:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:03.675 10:00:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:03.675 10:00:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.675 10:00:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:03.675 10:00:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:03.675 10:00:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:03.675 10:00:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:03.675 10:00:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:03.675 10:00:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:03.675 Cannot find device "nvmf_tgt_br" 00:13:03.675 10:00:02 -- nvmf/common.sh@154 -- # true 00:13:03.675 10:00:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.675 Cannot find device "nvmf_tgt_br2" 00:13:03.675 10:00:02 -- nvmf/common.sh@155 -- # true 00:13:03.675 10:00:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:03.675 10:00:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:03.675 Cannot find device "nvmf_tgt_br" 00:13:03.675 10:00:02 -- nvmf/common.sh@157 -- # true 00:13:03.675 10:00:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:03.675 Cannot find device "nvmf_tgt_br2" 00:13:03.675 10:00:02 -- nvmf/common.sh@158 -- # true 00:13:03.675 10:00:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:03.933 10:00:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:03.933 10:00:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.933 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:03.933 10:00:02 -- nvmf/common.sh@161 -- # true 00:13:03.933 10:00:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.933 10:00:02 -- nvmf/common.sh@162 -- # true 00:13:03.933 10:00:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:03.933 10:00:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:03.933 10:00:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:03.933 10:00:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:03.933 10:00:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:03.933 10:00:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.933 10:00:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.933 10:00:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:03.933 10:00:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:03.933 10:00:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:03.933 10:00:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:03.933 10:00:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:03.933 10:00:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:03.933 10:00:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.933 10:00:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.933 10:00:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.933 10:00:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:03.933 10:00:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:03.933 10:00:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.933 10:00:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.933 10:00:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.933 10:00:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.933 10:00:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.933 10:00:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:03.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:13:03.933 00:13:03.933 --- 10.0.0.2 ping statistics --- 00:13:03.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.933 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:03.933 10:00:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:03.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:03.933 00:13:03.933 --- 10.0.0.3 ping statistics --- 00:13:03.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.933 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:03.933 10:00:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:03.933 00:13:03.933 --- 10.0.0.1 ping statistics --- 00:13:03.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.933 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:03.933 10:00:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.933 10:00:02 -- nvmf/common.sh@421 -- # return 0 00:13:03.933 10:00:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.933 10:00:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.933 10:00:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.933 10:00:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.933 10:00:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.933 10:00:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.933 10:00:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.933 10:00:02 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:03.933 10:00:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.933 10:00:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.933 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:03.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.933 10:00:02 -- nvmf/common.sh@469 -- # nvmfpid=79345 00:13:03.933 10:00:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.933 10:00:02 -- nvmf/common.sh@470 -- # waitforlisten 79345 00:13:03.933 10:00:02 -- common/autotest_common.sh@829 -- # '[' -z 79345 ']' 00:13:03.933 10:00:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.933 10:00:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.933 10:00:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.933 10:00:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.933 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:13:04.192 [2024-12-16 10:00:02.591247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:04.192 [2024-12-16 10:00:02.591336] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.192 [2024-12-16 10:00:02.734538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.192 [2024-12-16 10:00:02.799474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:04.192 [2024-12-16 10:00:02.799602] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.192 [2024-12-16 10:00:02.799614] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.192 [2024-12-16 10:00:02.799621] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
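nvmf_veth_init, traced above, builds the test network this run uses: the initiator keeps 10.0.0.1 in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 live inside nvmf_tgt_ns_spdk, both ends of the veth pairs are bridged together on the host, TCP port 4420 is opened, and a one-packet ping of each address verifies the path before the target is started. Condensed from the trace; the second target interface, the link bring-up commands and the bridge FORWARD rule are omitted here for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check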
00:13:04.192 [2024-12-16 10:00:02.799731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.192 [2024-12-16 10:00:02.800797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.192 [2024-12-16 10:00:02.800826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.126 10:00:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.126 10:00:03 -- common/autotest_common.sh@862 -- # return 0 00:13:05.126 10:00:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:05.126 10:00:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:05.126 10:00:03 -- common/autotest_common.sh@10 -- # set +x 00:13:05.126 10:00:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.126 10:00:03 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:05.126 10:00:03 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.385 [2024-12-16 10:00:03.819279] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.385 10:00:03 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.643 10:00:04 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.643 [2024-12-16 10:00:04.241351] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.643 10:00:04 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.901 10:00:04 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:06.159 Malloc0 00:13:06.159 10:00:04 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:06.417 Delay0 00:13:06.417 10:00:04 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.675 10:00:05 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:06.937 NULL1 00:13:06.938 10:00:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:07.196 10:00:05 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79471 00:13:07.196 10:00:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:07.196 10:00:05 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:07.196 10:00:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.572 Read completed with error (sct=0, sc=11) 00:13:08.572 10:00:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.830 10:00:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:08.830 10:00:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:08.830 true 00:13:08.830 10:00:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:08.830 10:00:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.765 10:00:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.024 10:00:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:10.024 10:00:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:10.282 true 00:13:10.282 10:00:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:10.282 10:00:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.540 10:00:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.799 10:00:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:10.799 10:00:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:10.799 true 00:13:10.799 10:00:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:10.799 10:00:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.732 10:00:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.989 10:00:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:11.989 10:00:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:12.248 true 00:13:12.248 10:00:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:12.248 10:00:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.506 10:00:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.765 10:00:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:12.765 10:00:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:12.765 true 00:13:12.765 10:00:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:12.765 10:00:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.699 10:00:12 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.960 10:00:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:13.960 10:00:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:14.231 true 00:13:14.231 10:00:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:14.231 10:00:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.506 10:00:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.765 10:00:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:14.765 10:00:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:15.023 true 00:13:15.023 10:00:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:15.023 10:00:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.959 10:00:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.959 10:00:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:15.959 10:00:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:16.218 true 00:13:16.218 10:00:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:16.218 10:00:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.476 10:00:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.734 10:00:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:16.734 10:00:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:16.734 true 00:13:16.734 10:00:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:16.734 10:00:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.668 10:00:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.926 10:00:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:17.926 10:00:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:18.185 true 00:13:18.185 10:00:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:18.185 10:00:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.444 10:00:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.703 10:00:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:18.703 10:00:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:18.961 true 00:13:18.961 10:00:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:18.961 10:00:17 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.897 10:00:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.897 10:00:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:19.897 10:00:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:20.156 true 00:13:20.156 10:00:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:20.156 10:00:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.415 10:00:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.674 10:00:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:20.674 10:00:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:20.933 true 00:13:20.933 10:00:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:20.933 10:00:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.869 10:00:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.128 10:00:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:22.128 10:00:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:22.128 true 00:13:22.128 10:00:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:22.128 10:00:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.387 10:00:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.646 10:00:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:22.646 10:00:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:22.904 true 00:13:22.904 10:00:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:22.904 10:00:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.841 10:00:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.100 10:00:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:24.100 10:00:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:24.359 true 00:13:24.359 10:00:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:24.359 10:00:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.618 10:00:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.876 10:00:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:24.876 10:00:23 -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:25.135 true 00:13:25.135 10:00:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:25.135 10:00:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.135 10:00:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.394 10:00:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:25.394 10:00:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:25.653 true 00:13:25.653 10:00:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:25.653 10:00:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 10:00:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.029 10:00:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:27.029 10:00:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:27.288 true 00:13:27.288 10:00:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:27.288 10:00:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.225 10:00:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.225 10:00:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:28.225 10:00:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:28.484 true 00:13:28.484 10:00:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:28.484 10:00:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.743 10:00:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.002 10:00:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:29.002 10:00:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:29.260 true 00:13:29.260 10:00:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:29.260 10:00:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.197 10:00:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:30.197 10:00:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:30.197 10:00:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:30.455 true 00:13:30.455 10:00:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:30.455 10:00:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.714 10:00:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.972 10:00:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:30.972 10:00:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:31.231 true 00:13:31.231 10:00:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:31.231 10:00:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.167 10:00:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.167 10:00:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:32.167 10:00:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:32.426 true 00:13:32.426 10:00:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:32.426 10:00:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.684 10:00:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.943 10:00:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:32.943 10:00:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:33.201 true 00:13:33.201 10:00:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:33.201 10:00:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.138 10:00:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.396 10:00:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:34.396 10:00:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:34.655 true 00:13:34.655 10:00:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:34.655 10:00:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.913 10:00:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.172 10:00:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:35.172 10:00:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:35.430 true 00:13:35.430 10:00:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:35.430 10:00:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:35.430 10:00:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.997 10:00:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:35.997 10:00:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:35.997 true 00:13:35.997 10:00:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:35.997 10:00:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.409 10:00:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.409 10:00:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:37.409 10:00:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:37.409 Initializing NVMe Controllers 00:13:37.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.409 Controller IO queue size 128, less than required. 00:13:37.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.409 Controller IO queue size 128, less than required. 00:13:37.409 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:37.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:37.409 Initialization complete. Launching workers. 
00:13:37.409 ======================================================== 00:13:37.409 Latency(us) 00:13:37.409 Device Information : IOPS MiB/s Average min max 00:13:37.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 597.73 0.29 116618.06 3094.06 1107203.20 00:13:37.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13798.17 6.74 9276.38 2664.40 580964.47 00:13:37.409 ======================================================== 00:13:37.409 Total : 14395.90 7.03 13733.31 2664.40 1107203.20 00:13:37.409 00:13:37.668 true 00:13:37.668 10:00:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79471 00:13:37.668 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79471) - No such process 00:13:37.668 10:00:36 -- target/ns_hotplug_stress.sh@53 -- # wait 79471 00:13:37.668 10:00:36 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.926 10:00:36 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.185 10:00:36 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:38.185 10:00:36 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:38.185 10:00:36 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:38.185 10:00:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.185 10:00:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:38.443 null0 00:13:38.443 10:00:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.443 10:00:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.443 10:00:36 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:38.702 null1 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:38.702 null2 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.702 10:00:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:38.960 null3 00:13:38.960 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.960 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.960 10:00:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:39.219 null4 00:13:39.219 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.219 10:00:37 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.219 10:00:37 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:39.477 null5 00:13:39.477 10:00:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.477 10:00:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.477 10:00:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:39.736 null6 00:13:39.736 10:00:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.736 10:00:38 -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.736 10:00:38 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:39.995 null7 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
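The first stress phase, traced from null_size=1001 up to the perf summary above, keeps spdk_nvme_perf issuing 128-deep random reads against cnode1 while namespace 1 (the Delay0 bdev) is repeatedly hot-removed and re-added and the NULL1 bdev behind namespace 2 is resized one step per pass; the suppressed "Read completed with error (sct=0, sc=11)" messages are reads failing while the namespace is detached. Reduced to the rpc.py calls the loop drives, as a sketch rather than the verbatim script (variable names follow the trace, the increment is written out for illustration):

    while kill -0 "$PERF_PID"; do                                 # loop for as long as the perf job is still running
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                              # traced as 1001, 1002, ... 1029
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done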
00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
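The second phase, whose workers are being spawned in the surrounding trace, attaches the eight null bdevs created above (null0 through null7) as namespaces 1-8 and churns them from eight concurrent add_remove loops, each doing ten hot-add/hot-remove cycles against cnode1 before the script waits on all of the worker pids. A condensed reconstruction of that helper and the spawn loop as traced; this is a sketch, not the verbatim script:

    add_remove() {                                   # one worker: hot-add and hot-remove a single namespace
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    pids=()
    for (( i = 0; i < nthreads; i++ )); do           # nthreads=8 in this run
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"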
00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.995 10:00:38 -- target/ns_hotplug_stress.sh@66 -- # wait 80497 80498 80501 80503 80504 80506 80508 80510 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.254 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.512 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.512 10:00:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.512 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.771 
10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.771 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.029 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.029 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.030 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.288 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.547 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.547 10:00:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.547 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.806 10:00:40 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.806 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.065 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.324 10:00:40 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.583 10:00:40 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.583 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.842 10:00:41 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.842 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.100 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.359 10:00:41 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.618 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
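Note: the repeated @16-@18 entries here are the per-namespace hotplug workers from ns_hotplug_stress.sh; each worker alternates between attaching a null bdev as a namespace and detaching it again, ten iterations per worker, with eight workers running concurrently against nqn.2016-06.io.spdk:cnode1. A minimal sketch of that loop shape, reconstructed from the RPC calls in this trace (the nsid variable is an illustrative stand-in, not the script's exact wording):

  # hotplug worker for one namespace slot: nsid 1..8 maps to bdevs null0..null7
  for (( i = 0; i < 10; i++ )); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
          nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))"
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
          nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
  done

Eight of these loops interleaving is what produces the shuffled add/remove ordering in the lines above and below.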
00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.876 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.877 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.877 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.135 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.394 10:00:42 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.653 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.911 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.170 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.429 10:00:43 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.429 10:00:43 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.429 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.429 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.429 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.687 10:00:44 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.687 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.687 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.687 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.687 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.946 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.946 10:00:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.946 10:00:44 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:45.946 10:00:44 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:45.946 10:00:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.946 10:00:44 -- nvmf/common.sh@116 -- # sync 00:13:45.946 10:00:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.946 10:00:44 -- nvmf/common.sh@119 -- # set +e 00:13:45.946 10:00:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.946 10:00:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.946 rmmod nvme_tcp 00:13:45.946 rmmod nvme_fabrics 00:13:45.946 rmmod nvme_keyring 00:13:45.946 10:00:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.946 10:00:44 -- nvmf/common.sh@123 -- # set -e 00:13:45.946 10:00:44 -- nvmf/common.sh@124 -- # return 0 00:13:45.946 10:00:44 -- nvmf/common.sh@477 -- # '[' -n 79345 ']' 00:13:45.946 10:00:44 -- nvmf/common.sh@478 -- # killprocess 79345 00:13:45.946 10:00:44 -- common/autotest_common.sh@936 -- # '[' -z 79345 ']' 00:13:45.946 10:00:44 -- common/autotest_common.sh@940 -- # kill -0 79345 00:13:45.946 10:00:44 -- common/autotest_common.sh@941 -- # uname 00:13:45.946 10:00:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.946 10:00:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79345 00:13:45.946 10:00:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:45.946 10:00:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:45.946 10:00:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79345' 00:13:45.946 killing process with pid 79345 00:13:45.946 10:00:44 -- common/autotest_common.sh@955 -- # kill 79345 00:13:45.946 10:00:44 -- common/autotest_common.sh@960 -- # wait 79345 00:13:46.205 10:00:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:46.205 10:00:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:46.205 10:00:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:46.205 10:00:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.205 10:00:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:46.205 10:00:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.205 10:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.205 10:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.205 10:00:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:46.205 
************************************ 00:13:46.205 END TEST nvmf_ns_hotplug_stress 00:13:46.205 ************************************ 00:13:46.205 00:13:46.205 real 0m42.713s 00:13:46.205 user 3m25.176s 00:13:46.205 sys 0m12.052s 00:13:46.205 10:00:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:46.205 10:00:44 -- common/autotest_common.sh@10 -- # set +x 00:13:46.205 10:00:44 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:46.205 10:00:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:46.205 10:00:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.205 10:00:44 -- common/autotest_common.sh@10 -- # set +x 00:13:46.205 ************************************ 00:13:46.205 START TEST nvmf_connect_stress 00:13:46.205 ************************************ 00:13:46.205 10:00:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:46.205 * Looking for test storage... 00:13:46.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.205 10:00:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:46.205 10:00:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:46.205 10:00:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:46.464 10:00:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:46.464 10:00:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:46.464 10:00:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:46.464 10:00:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:46.464 10:00:44 -- scripts/common.sh@335 -- # IFS=.-: 00:13:46.464 10:00:44 -- scripts/common.sh@335 -- # read -ra ver1 00:13:46.464 10:00:44 -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.464 10:00:44 -- scripts/common.sh@336 -- # read -ra ver2 00:13:46.464 10:00:44 -- scripts/common.sh@337 -- # local 'op=<' 00:13:46.464 10:00:44 -- scripts/common.sh@339 -- # ver1_l=2 00:13:46.464 10:00:44 -- scripts/common.sh@340 -- # ver2_l=1 00:13:46.465 10:00:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:46.465 10:00:44 -- scripts/common.sh@343 -- # case "$op" in 00:13:46.465 10:00:44 -- scripts/common.sh@344 -- # : 1 00:13:46.465 10:00:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:46.465 10:00:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.465 10:00:44 -- scripts/common.sh@364 -- # decimal 1 00:13:46.465 10:00:44 -- scripts/common.sh@352 -- # local d=1 00:13:46.465 10:00:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.465 10:00:44 -- scripts/common.sh@354 -- # echo 1 00:13:46.465 10:00:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:46.465 10:00:44 -- scripts/common.sh@365 -- # decimal 2 00:13:46.465 10:00:44 -- scripts/common.sh@352 -- # local d=2 00:13:46.465 10:00:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.465 10:00:44 -- scripts/common.sh@354 -- # echo 2 00:13:46.465 10:00:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:46.465 10:00:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:46.465 10:00:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:46.465 10:00:44 -- scripts/common.sh@367 -- # return 0 00:13:46.465 10:00:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.465 10:00:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:46.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.465 --rc genhtml_branch_coverage=1 00:13:46.465 --rc genhtml_function_coverage=1 00:13:46.465 --rc genhtml_legend=1 00:13:46.465 --rc geninfo_all_blocks=1 00:13:46.465 --rc geninfo_unexecuted_blocks=1 00:13:46.465 00:13:46.465 ' 00:13:46.465 10:00:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:46.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.465 --rc genhtml_branch_coverage=1 00:13:46.465 --rc genhtml_function_coverage=1 00:13:46.465 --rc genhtml_legend=1 00:13:46.465 --rc geninfo_all_blocks=1 00:13:46.465 --rc geninfo_unexecuted_blocks=1 00:13:46.465 00:13:46.465 ' 00:13:46.465 10:00:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:46.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.465 --rc genhtml_branch_coverage=1 00:13:46.465 --rc genhtml_function_coverage=1 00:13:46.465 --rc genhtml_legend=1 00:13:46.465 --rc geninfo_all_blocks=1 00:13:46.465 --rc geninfo_unexecuted_blocks=1 00:13:46.465 00:13:46.465 ' 00:13:46.465 10:00:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:46.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.465 --rc genhtml_branch_coverage=1 00:13:46.465 --rc genhtml_function_coverage=1 00:13:46.465 --rc genhtml_legend=1 00:13:46.465 --rc geninfo_all_blocks=1 00:13:46.465 --rc geninfo_unexecuted_blocks=1 00:13:46.465 00:13:46.465 ' 00:13:46.465 10:00:44 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.465 10:00:44 -- nvmf/common.sh@7 -- # uname -s 00:13:46.465 10:00:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.465 10:00:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.465 10:00:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.465 10:00:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.465 10:00:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.465 10:00:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.465 10:00:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.465 10:00:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.465 10:00:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.465 10:00:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
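Note: the connect_stress prologue above sources nvmf/common.sh, which fixes the TCP port (4420) and generates a fresh host identity with nvme gen-hostnqn. For reference, an initiator reusing exactly the values printed here would connect along these lines (illustrative only; this run drives the target with SPDK's connect_stress binary rather than the kernel initiator):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed \
      --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed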
00:13:46.465 10:00:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:13:46.465 10:00:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.465 10:00:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.465 10:00:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.465 10:00:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.465 10:00:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.465 10:00:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.465 10:00:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.465 10:00:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.465 10:00:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.465 10:00:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.465 10:00:44 -- paths/export.sh@5 -- # export PATH 00:13:46.465 10:00:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.465 10:00:44 -- nvmf/common.sh@46 -- # : 0 00:13:46.465 10:00:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:46.465 10:00:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:46.465 10:00:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:46.465 10:00:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.465 10:00:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.465 10:00:44 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:46.465 10:00:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:46.465 10:00:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:46.465 10:00:44 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:46.465 10:00:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:46.465 10:00:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.465 10:00:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:46.465 10:00:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:46.465 10:00:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:46.465 10:00:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.465 10:00:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.465 10:00:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.465 10:00:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:46.465 10:00:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:46.465 10:00:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.465 10:00:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.465 10:00:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:46.465 10:00:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:46.465 10:00:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.465 10:00:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.465 10:00:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.465 10:00:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.465 10:00:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.465 10:00:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.465 10:00:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.465 10:00:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.465 10:00:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:46.465 10:00:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:46.465 Cannot find device "nvmf_tgt_br" 00:13:46.465 10:00:44 -- nvmf/common.sh@154 -- # true 00:13:46.465 10:00:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.465 Cannot find device "nvmf_tgt_br2" 00:13:46.465 10:00:44 -- nvmf/common.sh@155 -- # true 00:13:46.465 10:00:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:46.465 10:00:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:46.465 Cannot find device "nvmf_tgt_br" 00:13:46.465 10:00:44 -- nvmf/common.sh@157 -- # true 00:13:46.465 10:00:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:46.465 Cannot find device "nvmf_tgt_br2" 00:13:46.465 10:00:44 -- nvmf/common.sh@158 -- # true 00:13:46.465 10:00:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:46.465 10:00:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:46.465 10:00:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.465 10:00:45 -- nvmf/common.sh@161 -- # true 00:13:46.465 10:00:45 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.465 10:00:45 -- nvmf/common.sh@162 -- # true 00:13:46.465 10:00:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.465 10:00:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.465 10:00:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.465 10:00:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.465 10:00:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.465 10:00:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.465 10:00:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.465 10:00:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.465 10:00:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.466 10:00:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:46.724 10:00:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:46.724 10:00:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:46.724 10:00:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:46.724 10:00:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.724 10:00:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.724 10:00:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.724 10:00:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:46.724 10:00:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:46.724 10:00:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.724 10:00:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.724 10:00:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.724 10:00:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.724 10:00:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.724 10:00:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:46.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:46.724 00:13:46.724 --- 10.0.0.2 ping statistics --- 00:13:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.724 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:46.724 10:00:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:46.724 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.724 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:46.724 00:13:46.724 --- 10.0.0.3 ping statistics --- 00:13:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.724 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:46.724 10:00:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:46.724 00:13:46.724 --- 10.0.0.1 ping statistics --- 00:13:46.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.724 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:46.724 10:00:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.724 10:00:45 -- nvmf/common.sh@421 -- # return 0 00:13:46.724 10:00:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:46.724 10:00:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.724 10:00:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:46.724 10:00:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:46.724 10:00:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.724 10:00:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:46.724 10:00:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:46.724 10:00:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:46.724 10:00:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:46.724 10:00:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.724 10:00:45 -- common/autotest_common.sh@10 -- # set +x 00:13:46.724 10:00:45 -- nvmf/common.sh@469 -- # nvmfpid=81831 00:13:46.724 10:00:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:46.724 10:00:45 -- nvmf/common.sh@470 -- # waitforlisten 81831 00:13:46.724 10:00:45 -- common/autotest_common.sh@829 -- # '[' -z 81831 ']' 00:13:46.724 10:00:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.724 10:00:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.724 10:00:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.724 10:00:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.724 10:00:45 -- common/autotest_common.sh@10 -- # set +x 00:13:46.724 [2024-12-16 10:00:45.268846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.724 [2024-12-16 10:00:45.268934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.983 [2024-12-16 10:00:45.411153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.983 [2024-12-16 10:00:45.475381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.983 [2024-12-16 10:00:45.475560] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.983 [2024-12-16 10:00:45.475575] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.983 [2024-12-16 10:00:45.475587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
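Note: those three pings are the sanity check for the veth topology that nvmf_veth_init assembled a few entries earlier (common.sh@165-201): one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers enslaved to nvmf_br, and TCP/4420 opened in iptables. Condensed from the commands in this log, the layout is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, 10.0.0.2/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge                                # nvmf_init_br / nvmf_tgt_br join this bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT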
00:13:46.983 [2024-12-16 10:00:45.475865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.983 [2024-12-16 10:00:45.476204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.983 [2024-12-16 10:00:45.476240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.918 10:00:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.918 10:00:46 -- common/autotest_common.sh@862 -- # return 0 00:13:47.918 10:00:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:47.918 10:00:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.918 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.918 10:00:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.918 10:00:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.918 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.918 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.918 [2024-12-16 10:00:46.343014] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.918 10:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.918 10:00:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.918 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.918 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.918 10:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.918 10:00:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.918 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.918 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.918 [2024-12-16 10:00:46.364757] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.918 10:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.918 10:00:46 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.918 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.918 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:47.918 NULL1 00:13:47.918 10:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.918 10:00:46 -- target/connect_stress.sh@21 -- # PERF_PID=81883 00:13:47.918 10:00:46 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:47.918 10:00:46 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:47.918 10:00:46 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- 
target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.918 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.918 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.919 10:00:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.919 10:00:46 -- target/connect_stress.sh@28 -- # cat 00:13:47.919 10:00:46 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:47.919 10:00:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.919 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.919 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.178 10:00:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.178 10:00:46 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:48.178 10:00:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.178 10:00:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.178 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:13:48.745 10:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.745 10:00:47 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:48.745 10:00:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.745 10:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.745 10:00:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.004 10:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.004 10:00:47 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:49.004 10:00:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.004 10:00:47 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:49.004 10:00:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.262 10:00:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.262 10:00:47 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:49.262 10:00:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.262 10:00:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.262 10:00:47 -- common/autotest_common.sh@10 -- # set +x 00:13:49.521 10:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.521 10:00:48 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:49.521 10:00:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.521 10:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.521 10:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:49.779 10:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.779 10:00:48 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:49.779 10:00:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.779 10:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.779 10:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:50.347 10:00:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.347 10:00:48 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:50.347 10:00:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.347 10:00:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.347 10:00:48 -- common/autotest_common.sh@10 -- # set +x 00:13:50.605 10:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.605 10:00:49 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:50.605 10:00:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.605 10:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.605 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:13:50.865 10:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.865 10:00:49 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:50.865 10:00:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.865 10:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.865 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:13:51.123 10:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.123 10:00:49 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:51.123 10:00:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.123 10:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.123 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:13:51.382 10:00:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.382 10:00:49 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:51.382 10:00:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.382 10:00:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.382 10:00:49 -- common/autotest_common.sh@10 -- # set +x 00:13:51.949 10:00:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.949 10:00:50 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:51.949 10:00:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.949 10:00:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.949 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 10:00:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 10:00:50 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:52.207 10:00:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.207 10:00:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 
10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:52.465 10:00:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.465 10:00:50 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:52.465 10:00:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.465 10:00:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.465 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:13:52.723 10:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.723 10:00:51 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:52.723 10:00:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.723 10:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.723 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.291 10:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.291 10:00:51 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:53.291 10:00:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.291 10:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.291 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.550 10:00:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.550 10:00:51 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:53.550 10:00:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.550 10:00:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.550 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:13:53.808 10:00:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.808 10:00:52 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:53.808 10:00:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.808 10:00:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.808 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 10:00:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 10:00:52 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:54.067 10:00:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.067 10:00:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:54.325 10:00:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.325 10:00:52 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:54.325 10:00:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.325 10:00:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.325 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:13:54.892 10:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.892 10:00:53 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:54.892 10:00:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.892 10:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.892 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:13:55.151 10:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.151 10:00:53 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:55.151 10:00:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.151 10:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.151 10:00:53 -- common/autotest_common.sh@10 -- # set +x 00:13:55.409 10:00:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.409 10:00:53 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:55.409 10:00:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.409 10:00:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.410 10:00:53 -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.668 10:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.668 10:00:54 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:55.668 10:00:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.668 10:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.668 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:13:55.926 10:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.926 10:00:54 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:55.926 10:00:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.926 10:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.926 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:13:56.494 10:00:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.494 10:00:54 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:56.494 10:00:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.494 10:00:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.494 10:00:54 -- common/autotest_common.sh@10 -- # set +x 00:13:56.752 10:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.752 10:00:55 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:56.752 10:00:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.752 10:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.752 10:00:55 -- common/autotest_common.sh@10 -- # set +x 00:13:57.025 10:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.025 10:00:55 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:57.025 10:00:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.025 10:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.025 10:00:55 -- common/autotest_common.sh@10 -- # set +x 00:13:57.320 10:00:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.320 10:00:55 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:57.320 10:00:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.320 10:00:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.320 10:00:55 -- common/autotest_common.sh@10 -- # set +x 00:13:57.593 10:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.593 10:00:56 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:57.593 10:00:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.593 10:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.593 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:13:57.851 10:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.851 10:00:56 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:57.851 10:00:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.851 10:00:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.851 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.368 10:00:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 10:00:56 -- target/connect_stress.sh@34 -- # kill -0 81883 00:13:58.368 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81883) - No such process 00:13:58.368 10:00:56 -- target/connect_stress.sh@38 -- # wait 81883 00:13:58.368 10:00:56 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:58.368 10:00:56 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:58.368 10:00:56 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:58.368 10:00:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:58.368 10:00:56 -- nvmf/common.sh@116 -- # sync 00:13:58.368 10:00:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:58.368 10:00:56 -- nvmf/common.sh@119 -- # set +e 00:13:58.368 10:00:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:58.368 10:00:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:58.368 rmmod nvme_tcp 00:13:58.368 rmmod nvme_fabrics 00:13:58.368 rmmod nvme_keyring 00:13:58.368 10:00:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:58.368 10:00:56 -- nvmf/common.sh@123 -- # set -e 00:13:58.368 10:00:56 -- nvmf/common.sh@124 -- # return 0 00:13:58.368 10:00:56 -- nvmf/common.sh@477 -- # '[' -n 81831 ']' 00:13:58.368 10:00:56 -- nvmf/common.sh@478 -- # killprocess 81831 00:13:58.368 10:00:56 -- common/autotest_common.sh@936 -- # '[' -z 81831 ']' 00:13:58.368 10:00:56 -- common/autotest_common.sh@940 -- # kill -0 81831 00:13:58.368 10:00:56 -- common/autotest_common.sh@941 -- # uname 00:13:58.368 10:00:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.368 10:00:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81831 00:13:58.368 killing process with pid 81831 00:13:58.368 10:00:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:58.368 10:00:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:58.368 10:00:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81831' 00:13:58.368 10:00:56 -- common/autotest_common.sh@955 -- # kill 81831 00:13:58.368 10:00:56 -- common/autotest_common.sh@960 -- # wait 81831 00:13:58.627 10:00:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:58.627 10:00:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:58.627 10:00:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:58.627 10:00:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.627 10:00:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:58.627 10:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.627 10:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.627 10:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.627 10:00:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:58.627 00:13:58.627 real 0m12.424s 00:13:58.627 user 0m41.600s 00:13:58.627 sys 0m3.241s 00:13:58.627 10:00:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.627 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:13:58.627 ************************************ 00:13:58.627 END TEST nvmf_connect_stress 00:13:58.627 ************************************ 00:13:58.627 10:00:57 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.627 10:00:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:58.627 10:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.627 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:13:58.627 ************************************ 00:13:58.627 START TEST nvmf_fused_ordering 00:13:58.627 ************************************ 00:13:58.627 10:00:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.886 * Looking for test storage... 
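For readability, the teardown that nvmftestfini performs in the trace above reduces to the following sequence. This is a condensed sketch reconstructed from the commands logged in this run (the PID 81831 and the interface name nvmf_init_if are the values from this log), not the script source itself:

  sync                            # flush outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp         # the -v output above shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  modprobe -v -r nvme-fabrics     # likely a no-op at this point, the module is already gone
  kill 81831                      # stop the nvmf_tgt started for connect_stress; killprocess then waits on it
  # _remove_spdk_ns runs next with its output redirected away; presumably it tears down
  # the nvmf_tgt_ns_spdk network namespace created during setup
  ip -4 addr flush nvmf_init_if   # drop the 10.0.0.1/24 initiator address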
00:13:58.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:58.886 10:00:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:58.886 10:00:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:58.886 10:00:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:58.886 10:00:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:58.886 10:00:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:58.886 10:00:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:58.886 10:00:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:58.886 10:00:57 -- scripts/common.sh@335 -- # IFS=.-: 00:13:58.886 10:00:57 -- scripts/common.sh@335 -- # read -ra ver1 00:13:58.886 10:00:57 -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.886 10:00:57 -- scripts/common.sh@336 -- # read -ra ver2 00:13:58.886 10:00:57 -- scripts/common.sh@337 -- # local 'op=<' 00:13:58.886 10:00:57 -- scripts/common.sh@339 -- # ver1_l=2 00:13:58.886 10:00:57 -- scripts/common.sh@340 -- # ver2_l=1 00:13:58.886 10:00:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:58.886 10:00:57 -- scripts/common.sh@343 -- # case "$op" in 00:13:58.886 10:00:57 -- scripts/common.sh@344 -- # : 1 00:13:58.886 10:00:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:58.886 10:00:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.886 10:00:57 -- scripts/common.sh@364 -- # decimal 1 00:13:58.886 10:00:57 -- scripts/common.sh@352 -- # local d=1 00:13:58.886 10:00:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.886 10:00:57 -- scripts/common.sh@354 -- # echo 1 00:13:58.886 10:00:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:58.886 10:00:57 -- scripts/common.sh@365 -- # decimal 2 00:13:58.886 10:00:57 -- scripts/common.sh@352 -- # local d=2 00:13:58.886 10:00:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.886 10:00:57 -- scripts/common.sh@354 -- # echo 2 00:13:58.886 10:00:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:58.886 10:00:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:58.886 10:00:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:58.886 10:00:57 -- scripts/common.sh@367 -- # return 0 00:13:58.886 10:00:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.886 10:00:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:58.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.886 --rc genhtml_branch_coverage=1 00:13:58.886 --rc genhtml_function_coverage=1 00:13:58.886 --rc genhtml_legend=1 00:13:58.886 --rc geninfo_all_blocks=1 00:13:58.887 --rc geninfo_unexecuted_blocks=1 00:13:58.887 00:13:58.887 ' 00:13:58.887 10:00:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:58.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.887 --rc genhtml_branch_coverage=1 00:13:58.887 --rc genhtml_function_coverage=1 00:13:58.887 --rc genhtml_legend=1 00:13:58.887 --rc geninfo_all_blocks=1 00:13:58.887 --rc geninfo_unexecuted_blocks=1 00:13:58.887 00:13:58.887 ' 00:13:58.887 10:00:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:58.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.887 --rc genhtml_branch_coverage=1 00:13:58.887 --rc genhtml_function_coverage=1 00:13:58.887 --rc genhtml_legend=1 00:13:58.887 --rc geninfo_all_blocks=1 00:13:58.887 --rc geninfo_unexecuted_blocks=1 00:13:58.887 00:13:58.887 ' 00:13:58.887 
10:00:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:58.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.887 --rc genhtml_branch_coverage=1 00:13:58.887 --rc genhtml_function_coverage=1 00:13:58.887 --rc genhtml_legend=1 00:13:58.887 --rc geninfo_all_blocks=1 00:13:58.887 --rc geninfo_unexecuted_blocks=1 00:13:58.887 00:13:58.887 ' 00:13:58.887 10:00:57 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:58.887 10:00:57 -- nvmf/common.sh@7 -- # uname -s 00:13:58.887 10:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.887 10:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.887 10:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.887 10:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.887 10:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.887 10:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.887 10:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.887 10:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.887 10:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.887 10:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:13:58.887 10:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:13:58.887 10:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.887 10:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.887 10:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.887 10:00:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.887 10:00:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.887 10:00:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.887 10:00:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.887 10:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.887 10:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.887 10:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.887 10:00:57 -- paths/export.sh@5 -- # export PATH 00:13:58.887 10:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.887 10:00:57 -- nvmf/common.sh@46 -- # : 0 00:13:58.887 10:00:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:58.887 10:00:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:58.887 10:00:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:58.887 10:00:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.887 10:00:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.887 10:00:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:58.887 10:00:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:58.887 10:00:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:58.887 10:00:57 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:58.887 10:00:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:58.887 10:00:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.887 10:00:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:58.887 10:00:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:58.887 10:00:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:58.887 10:00:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.887 10:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.887 10:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.887 10:00:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:58.887 10:00:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:58.887 10:00:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.887 10:00:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.887 10:00:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:58.887 10:00:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:58.887 10:00:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.887 10:00:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.887 10:00:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.887 10:00:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:58.887 10:00:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.887 10:00:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.887 10:00:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.887 10:00:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.887 10:00:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:58.887 10:00:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:58.887 Cannot find device "nvmf_tgt_br" 00:13:58.887 10:00:57 -- nvmf/common.sh@154 -- # true 00:13:58.887 10:00:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.887 Cannot find device "nvmf_tgt_br2" 00:13:58.887 10:00:57 -- nvmf/common.sh@155 -- # true 00:13:58.887 10:00:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:58.887 10:00:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:58.887 Cannot find device "nvmf_tgt_br" 00:13:58.887 10:00:57 -- nvmf/common.sh@157 -- # true 00:13:58.887 10:00:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:58.887 Cannot find device "nvmf_tgt_br2" 00:13:58.887 10:00:57 -- nvmf/common.sh@158 -- # true 00:13:58.887 10:00:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:58.887 10:00:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:59.146 10:00:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:59.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.147 10:00:57 -- nvmf/common.sh@161 -- # true 00:13:59.147 10:00:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:59.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:59.147 10:00:57 -- nvmf/common.sh@162 -- # true 00:13:59.147 10:00:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:59.147 10:00:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:59.147 10:00:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:59.147 10:00:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:59.147 10:00:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:59.147 10:00:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:59.147 10:00:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:59.147 10:00:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:59.147 10:00:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:59.147 10:00:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:59.147 10:00:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:59.147 10:00:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:59.147 10:00:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:59.147 10:00:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:59.147 10:00:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:59.147 10:00:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:59.147 10:00:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:59.147 10:00:57 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:59.147 10:00:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:59.147 10:00:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:59.147 10:00:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:59.147 10:00:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:59.147 10:00:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:59.147 10:00:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:59.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:13:59.147 00:13:59.147 --- 10.0.0.2 ping statistics --- 00:13:59.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.147 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:59.147 10:00:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:59.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:59.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:59.147 00:13:59.147 --- 10.0.0.3 ping statistics --- 00:13:59.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.147 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:59.147 10:00:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:59.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:13:59.147 00:13:59.147 --- 10.0.0.1 ping statistics --- 00:13:59.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.147 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:13:59.147 10:00:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.147 10:00:57 -- nvmf/common.sh@421 -- # return 0 00:13:59.147 10:00:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:59.147 10:00:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.147 10:00:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:59.147 10:00:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:59.147 10:00:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.147 10:00:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:59.147 10:00:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:59.147 10:00:57 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:59.147 10:00:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:59.147 10:00:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:59.147 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:13:59.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.147 10:00:57 -- nvmf/common.sh@469 -- # nvmfpid=82217 00:13:59.147 10:00:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.147 10:00:57 -- nvmf/common.sh@470 -- # waitforlisten 82217 00:13:59.147 10:00:57 -- common/autotest_common.sh@829 -- # '[' -z 82217 ']' 00:13:59.147 10:00:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.147 10:00:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.147 10:00:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
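Before the fused_ordering target is started, the nvmf_veth_init sequence above builds a small veth-plus-bridge topology. Condensed from the commands logged in this run (interface, namespace and address names as they appear here, with the various "ip link set ... up" calls omitted), it is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side interfaces...
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # ...are moved into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge                              # bridge ties the three *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on port 4420
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside nvmf_tgt_ns_spdk) simply confirm that the bridge forwards traffic in both directions before nvmf_tgt is launched inside the namespace with -m 0x2.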
00:13:59.147 10:00:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.147 10:00:57 -- common/autotest_common.sh@10 -- # set +x 00:13:59.147 [2024-12-16 10:00:57.752855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:59.147 [2024-12-16 10:00:57.752922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.406 [2024-12-16 10:00:57.889642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.406 [2024-12-16 10:00:57.943404] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:59.406 [2024-12-16 10:00:57.943569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.406 [2024-12-16 10:00:57.943581] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.406 [2024-12-16 10:00:57.943589] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.406 [2024-12-16 10:00:57.943614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.342 10:00:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.342 10:00:58 -- common/autotest_common.sh@862 -- # return 0 00:14:00.342 10:00:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:00.342 10:00:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 10:00:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.342 10:00:58 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 [2024-12-16 10:00:58.841601] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 [2024-12-16 10:00:58.857694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 NULL1 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:00.342 10:00:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.342 10:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:00.342 10:00:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.342 10:00:58 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:00.342 [2024-12-16 10:00:58.909760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.342 [2024-12-16 10:00:58.909810] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82267 ] 00:14:00.910 Attached to nqn.2016-06.io.spdk:cnode1 00:14:00.910 Namespace ID: 1 size: 1GB 00:14:00.910 fused_ordering(0) 00:14:00.910 fused_ordering(1) 00:14:00.910 fused_ordering(2) 00:14:00.910 fused_ordering(3) 00:14:00.910 fused_ordering(4) 00:14:00.910 fused_ordering(5) 00:14:00.910 fused_ordering(6) 00:14:00.910 fused_ordering(7) 00:14:00.910 fused_ordering(8) 00:14:00.910 fused_ordering(9) 00:14:00.910 fused_ordering(10) 00:14:00.910 fused_ordering(11) 00:14:00.910 fused_ordering(12) 00:14:00.910 fused_ordering(13) 00:14:00.910 fused_ordering(14) 00:14:00.910 fused_ordering(15) 00:14:00.910 fused_ordering(16) 00:14:00.910 fused_ordering(17) 00:14:00.910 fused_ordering(18) 00:14:00.910 fused_ordering(19) 00:14:00.910 fused_ordering(20) 00:14:00.910 fused_ordering(21) 00:14:00.910 fused_ordering(22) 00:14:00.910 fused_ordering(23) 00:14:00.910 fused_ordering(24) 00:14:00.910 fused_ordering(25) 00:14:00.910 fused_ordering(26) 00:14:00.910 fused_ordering(27) 00:14:00.910 fused_ordering(28) 00:14:00.910 fused_ordering(29) 00:14:00.910 fused_ordering(30) 00:14:00.910 fused_ordering(31) 00:14:00.910 fused_ordering(32) 00:14:00.910 fused_ordering(33) 00:14:00.910 fused_ordering(34) 00:14:00.910 fused_ordering(35) 00:14:00.910 fused_ordering(36) 00:14:00.910 fused_ordering(37) 00:14:00.910 fused_ordering(38) 00:14:00.910 fused_ordering(39) 00:14:00.910 fused_ordering(40) 00:14:00.910 fused_ordering(41) 00:14:00.910 fused_ordering(42) 00:14:00.910 fused_ordering(43) 00:14:00.910 fused_ordering(44) 00:14:00.910 fused_ordering(45) 00:14:00.910 fused_ordering(46) 00:14:00.910 fused_ordering(47) 00:14:00.910 fused_ordering(48) 00:14:00.910 fused_ordering(49) 00:14:00.910 fused_ordering(50) 00:14:00.910 fused_ordering(51) 00:14:00.910 fused_ordering(52) 00:14:00.910 fused_ordering(53) 00:14:00.910 fused_ordering(54) 00:14:00.910 fused_ordering(55) 00:14:00.910 fused_ordering(56) 00:14:00.910 fused_ordering(57) 00:14:00.910 fused_ordering(58) 00:14:00.910 fused_ordering(59) 00:14:00.910 fused_ordering(60) 00:14:00.910 fused_ordering(61) 00:14:00.910 fused_ordering(62) 00:14:00.910 fused_ordering(63) 00:14:00.910 fused_ordering(64) 00:14:00.910 fused_ordering(65) 00:14:00.910 fused_ordering(66) 00:14:00.910 fused_ordering(67) 00:14:00.910 fused_ordering(68) 00:14:00.910 fused_ordering(69) 00:14:00.910 fused_ordering(70) 00:14:00.910 fused_ordering(71) 00:14:00.910 fused_ordering(72) 00:14:00.910 
fused_ordering(73) 00:14:00.910 fused_ordering(74) 00:14:00.910 fused_ordering(75) 00:14:00.910 fused_ordering(76) 00:14:00.910 fused_ordering(77) 00:14:00.910 fused_ordering(78) 00:14:00.910 fused_ordering(79) 00:14:00.910 fused_ordering(80) 00:14:00.910 fused_ordering(81) 00:14:00.910 fused_ordering(82) 00:14:00.910 fused_ordering(83) 00:14:00.910 fused_ordering(84) 00:14:00.910 fused_ordering(85) 00:14:00.910 fused_ordering(86) 00:14:00.910 fused_ordering(87) 00:14:00.910 fused_ordering(88) 00:14:00.910 fused_ordering(89) 00:14:00.910 fused_ordering(90) 00:14:00.910 fused_ordering(91) 00:14:00.910 fused_ordering(92) 00:14:00.910 fused_ordering(93) 00:14:00.910 fused_ordering(94) 00:14:00.910 fused_ordering(95) 00:14:00.910 fused_ordering(96) 00:14:00.910 fused_ordering(97) 00:14:00.910 fused_ordering(98) 00:14:00.910 fused_ordering(99) 00:14:00.910 fused_ordering(100) 00:14:00.910 fused_ordering(101) 00:14:00.910 fused_ordering(102) 00:14:00.910 fused_ordering(103) 00:14:00.910 fused_ordering(104) 00:14:00.910 fused_ordering(105) 00:14:00.910 fused_ordering(106) 00:14:00.910 fused_ordering(107) 00:14:00.910 fused_ordering(108) 00:14:00.910 fused_ordering(109) 00:14:00.910 fused_ordering(110) 00:14:00.910 fused_ordering(111) 00:14:00.910 fused_ordering(112) 00:14:00.910 fused_ordering(113) 00:14:00.910 fused_ordering(114) 00:14:00.910 fused_ordering(115) 00:14:00.910 fused_ordering(116) 00:14:00.910 fused_ordering(117) 00:14:00.910 fused_ordering(118) 00:14:00.910 fused_ordering(119) 00:14:00.910 fused_ordering(120) 00:14:00.910 fused_ordering(121) 00:14:00.910 fused_ordering(122) 00:14:00.910 fused_ordering(123) 00:14:00.910 fused_ordering(124) 00:14:00.910 fused_ordering(125) 00:14:00.910 fused_ordering(126) 00:14:00.910 fused_ordering(127) 00:14:00.910 fused_ordering(128) 00:14:00.911 fused_ordering(129) 00:14:00.911 fused_ordering(130) 00:14:00.911 fused_ordering(131) 00:14:00.911 fused_ordering(132) 00:14:00.911 fused_ordering(133) 00:14:00.911 fused_ordering(134) 00:14:00.911 fused_ordering(135) 00:14:00.911 fused_ordering(136) 00:14:00.911 fused_ordering(137) 00:14:00.911 fused_ordering(138) 00:14:00.911 fused_ordering(139) 00:14:00.911 fused_ordering(140) 00:14:00.911 fused_ordering(141) 00:14:00.911 fused_ordering(142) 00:14:00.911 fused_ordering(143) 00:14:00.911 fused_ordering(144) 00:14:00.911 fused_ordering(145) 00:14:00.911 fused_ordering(146) 00:14:00.911 fused_ordering(147) 00:14:00.911 fused_ordering(148) 00:14:00.911 fused_ordering(149) 00:14:00.911 fused_ordering(150) 00:14:00.911 fused_ordering(151) 00:14:00.911 fused_ordering(152) 00:14:00.911 fused_ordering(153) 00:14:00.911 fused_ordering(154) 00:14:00.911 fused_ordering(155) 00:14:00.911 fused_ordering(156) 00:14:00.911 fused_ordering(157) 00:14:00.911 fused_ordering(158) 00:14:00.911 fused_ordering(159) 00:14:00.911 fused_ordering(160) 00:14:00.911 fused_ordering(161) 00:14:00.911 fused_ordering(162) 00:14:00.911 fused_ordering(163) 00:14:00.911 fused_ordering(164) 00:14:00.911 fused_ordering(165) 00:14:00.911 fused_ordering(166) 00:14:00.911 fused_ordering(167) 00:14:00.911 fused_ordering(168) 00:14:00.911 fused_ordering(169) 00:14:00.911 fused_ordering(170) 00:14:00.911 fused_ordering(171) 00:14:00.911 fused_ordering(172) 00:14:00.911 fused_ordering(173) 00:14:00.911 fused_ordering(174) 00:14:00.911 fused_ordering(175) 00:14:00.911 fused_ordering(176) 00:14:00.911 fused_ordering(177) 00:14:00.911 fused_ordering(178) 00:14:00.911 fused_ordering(179) 00:14:00.911 fused_ordering(180) 00:14:00.911 
fused_ordering(181) 00:14:00.911 fused_ordering(182) 00:14:00.911 fused_ordering(183) 00:14:00.911 fused_ordering(184) 00:14:00.911 fused_ordering(185) 00:14:00.911 fused_ordering(186) 00:14:00.911 fused_ordering(187) 00:14:00.911 fused_ordering(188) 00:14:00.911 fused_ordering(189) 00:14:00.911 fused_ordering(190) 00:14:00.911 fused_ordering(191) 00:14:00.911 fused_ordering(192) 00:14:00.911 fused_ordering(193) 00:14:00.911 fused_ordering(194) 00:14:00.911 fused_ordering(195) 00:14:00.911 fused_ordering(196) 00:14:00.911 fused_ordering(197) 00:14:00.911 fused_ordering(198) 00:14:00.911 fused_ordering(199) 00:14:00.911 fused_ordering(200) 00:14:00.911 fused_ordering(201) 00:14:00.911 fused_ordering(202) 00:14:00.911 fused_ordering(203) 00:14:00.911 fused_ordering(204) 00:14:00.911 fused_ordering(205) 00:14:00.911 fused_ordering(206) 00:14:00.911 fused_ordering(207) 00:14:00.911 fused_ordering(208) 00:14:00.911 fused_ordering(209) 00:14:00.911 fused_ordering(210) 00:14:00.911 fused_ordering(211) 00:14:00.911 fused_ordering(212) 00:14:00.911 fused_ordering(213) 00:14:00.911 fused_ordering(214) 00:14:00.911 fused_ordering(215) 00:14:00.911 fused_ordering(216) 00:14:00.911 fused_ordering(217) 00:14:00.911 fused_ordering(218) 00:14:00.911 fused_ordering(219) 00:14:00.911 fused_ordering(220) 00:14:00.911 fused_ordering(221) 00:14:00.911 fused_ordering(222) 00:14:00.911 fused_ordering(223) 00:14:00.911 fused_ordering(224) 00:14:00.911 fused_ordering(225) 00:14:00.911 fused_ordering(226) 00:14:00.911 fused_ordering(227) 00:14:00.911 fused_ordering(228) 00:14:00.911 fused_ordering(229) 00:14:00.911 fused_ordering(230) 00:14:00.911 fused_ordering(231) 00:14:00.911 fused_ordering(232) 00:14:00.911 fused_ordering(233) 00:14:00.911 fused_ordering(234) 00:14:00.911 fused_ordering(235) 00:14:00.911 fused_ordering(236) 00:14:00.911 fused_ordering(237) 00:14:00.911 fused_ordering(238) 00:14:00.911 fused_ordering(239) 00:14:00.911 fused_ordering(240) 00:14:00.911 fused_ordering(241) 00:14:00.911 fused_ordering(242) 00:14:00.911 fused_ordering(243) 00:14:00.911 fused_ordering(244) 00:14:00.911 fused_ordering(245) 00:14:00.911 fused_ordering(246) 00:14:00.911 fused_ordering(247) 00:14:00.911 fused_ordering(248) 00:14:00.911 fused_ordering(249) 00:14:00.911 fused_ordering(250) 00:14:00.911 fused_ordering(251) 00:14:00.911 fused_ordering(252) 00:14:00.911 fused_ordering(253) 00:14:00.911 fused_ordering(254) 00:14:00.911 fused_ordering(255) 00:14:00.911 fused_ordering(256) 00:14:00.911 fused_ordering(257) 00:14:00.911 fused_ordering(258) 00:14:00.911 fused_ordering(259) 00:14:00.911 fused_ordering(260) 00:14:00.911 fused_ordering(261) 00:14:00.911 fused_ordering(262) 00:14:00.911 fused_ordering(263) 00:14:00.911 fused_ordering(264) 00:14:00.911 fused_ordering(265) 00:14:00.911 fused_ordering(266) 00:14:00.911 fused_ordering(267) 00:14:00.911 fused_ordering(268) 00:14:00.911 fused_ordering(269) 00:14:00.911 fused_ordering(270) 00:14:00.911 fused_ordering(271) 00:14:00.911 fused_ordering(272) 00:14:00.911 fused_ordering(273) 00:14:00.911 fused_ordering(274) 00:14:00.911 fused_ordering(275) 00:14:00.911 fused_ordering(276) 00:14:00.911 fused_ordering(277) 00:14:00.911 fused_ordering(278) 00:14:00.911 fused_ordering(279) 00:14:00.911 fused_ordering(280) 00:14:00.911 fused_ordering(281) 00:14:00.911 fused_ordering(282) 00:14:00.911 fused_ordering(283) 00:14:00.911 fused_ordering(284) 00:14:00.911 fused_ordering(285) 00:14:00.911 fused_ordering(286) 00:14:00.911 fused_ordering(287) 00:14:00.911 fused_ordering(288) 
00:14:00.911 fused_ordering(289) 00:14:00.911 fused_ordering(290) 00:14:00.911 fused_ordering(291) 00:14:00.911 fused_ordering(292) 00:14:00.911 fused_ordering(293) 00:14:00.911 fused_ordering(294) 00:14:00.911 fused_ordering(295) 00:14:00.911 fused_ordering(296) 00:14:00.911 fused_ordering(297) 00:14:00.911 fused_ordering(298) 00:14:00.911 fused_ordering(299) 00:14:00.911 fused_ordering(300) 00:14:00.911 fused_ordering(301) 00:14:00.911 fused_ordering(302) 00:14:00.911 fused_ordering(303) 00:14:00.911 fused_ordering(304) 00:14:00.911 fused_ordering(305) 00:14:00.911 fused_ordering(306) 00:14:00.911 fused_ordering(307) 00:14:00.911 fused_ordering(308) 00:14:00.911 fused_ordering(309) 00:14:00.911 fused_ordering(310) 00:14:00.911 fused_ordering(311) 00:14:00.911 fused_ordering(312) 00:14:00.911 fused_ordering(313) 00:14:00.911 fused_ordering(314) 00:14:00.911 fused_ordering(315) 00:14:00.911 fused_ordering(316) 00:14:00.911 fused_ordering(317) 00:14:00.911 fused_ordering(318) 00:14:00.911 fused_ordering(319) 00:14:00.911 fused_ordering(320) 00:14:00.911 fused_ordering(321) 00:14:00.911 fused_ordering(322) 00:14:00.911 fused_ordering(323) 00:14:00.911 fused_ordering(324) 00:14:00.911 fused_ordering(325) 00:14:00.911 fused_ordering(326) 00:14:00.911 fused_ordering(327) 00:14:00.911 fused_ordering(328) 00:14:00.911 fused_ordering(329) 00:14:00.911 fused_ordering(330) 00:14:00.911 fused_ordering(331) 00:14:00.911 fused_ordering(332) 00:14:00.911 fused_ordering(333) 00:14:00.911 fused_ordering(334) 00:14:00.911 fused_ordering(335) 00:14:00.911 fused_ordering(336) 00:14:00.911 fused_ordering(337) 00:14:00.911 fused_ordering(338) 00:14:00.911 fused_ordering(339) 00:14:00.911 fused_ordering(340) 00:14:00.911 fused_ordering(341) 00:14:00.911 fused_ordering(342) 00:14:00.911 fused_ordering(343) 00:14:00.911 fused_ordering(344) 00:14:00.911 fused_ordering(345) 00:14:00.911 fused_ordering(346) 00:14:00.911 fused_ordering(347) 00:14:00.911 fused_ordering(348) 00:14:00.911 fused_ordering(349) 00:14:00.911 fused_ordering(350) 00:14:00.911 fused_ordering(351) 00:14:00.911 fused_ordering(352) 00:14:00.911 fused_ordering(353) 00:14:00.911 fused_ordering(354) 00:14:00.911 fused_ordering(355) 00:14:00.911 fused_ordering(356) 00:14:00.911 fused_ordering(357) 00:14:00.911 fused_ordering(358) 00:14:00.911 fused_ordering(359) 00:14:00.911 fused_ordering(360) 00:14:00.911 fused_ordering(361) 00:14:00.911 fused_ordering(362) 00:14:00.911 fused_ordering(363) 00:14:00.911 fused_ordering(364) 00:14:00.912 fused_ordering(365) 00:14:00.912 fused_ordering(366) 00:14:00.912 fused_ordering(367) 00:14:00.912 fused_ordering(368) 00:14:00.912 fused_ordering(369) 00:14:00.912 fused_ordering(370) 00:14:00.912 fused_ordering(371) 00:14:00.912 fused_ordering(372) 00:14:00.912 fused_ordering(373) 00:14:00.912 fused_ordering(374) 00:14:00.912 fused_ordering(375) 00:14:00.912 fused_ordering(376) 00:14:00.912 fused_ordering(377) 00:14:00.912 fused_ordering(378) 00:14:00.912 fused_ordering(379) 00:14:00.912 fused_ordering(380) 00:14:00.912 fused_ordering(381) 00:14:00.912 fused_ordering(382) 00:14:00.912 fused_ordering(383) 00:14:00.912 fused_ordering(384) 00:14:00.912 fused_ordering(385) 00:14:00.912 fused_ordering(386) 00:14:00.912 fused_ordering(387) 00:14:00.912 fused_ordering(388) 00:14:00.912 fused_ordering(389) 00:14:00.912 fused_ordering(390) 00:14:00.912 fused_ordering(391) 00:14:00.912 fused_ordering(392) 00:14:00.912 fused_ordering(393) 00:14:00.912 fused_ordering(394) 00:14:00.912 fused_ordering(395) 00:14:00.912 
fused_ordering(396) 00:14:00.912
[fused_ordering(397) through fused_ordering(1022) omitted: identical output with the counter incrementing by one, elapsed timestamps advancing from 00:14:00.912 to 00:14:02.000]
fused_ordering(1023) 00:14:02.000 10:01:00 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:02.000 10:01:00 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:02.000 10:01:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:02.000 10:01:00 -- nvmf/common.sh@116 -- # sync 00:14:02.261 10:01:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:02.261 10:01:00 -- nvmf/common.sh@119 -- # set +e 00:14:02.261 10:01:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:02.261 10:01:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:02.261 rmmod
nvme_tcp 00:14:02.261 rmmod nvme_fabrics 00:14:02.261 rmmod nvme_keyring 00:14:02.261 10:01:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:02.261 10:01:00 -- nvmf/common.sh@123 -- # set -e 00:14:02.261 10:01:00 -- nvmf/common.sh@124 -- # return 0 00:14:02.261 10:01:00 -- nvmf/common.sh@477 -- # '[' -n 82217 ']' 00:14:02.261 10:01:00 -- nvmf/common.sh@478 -- # killprocess 82217 00:14:02.261 10:01:00 -- common/autotest_common.sh@936 -- # '[' -z 82217 ']' 00:14:02.261 10:01:00 -- common/autotest_common.sh@940 -- # kill -0 82217 00:14:02.261 10:01:00 -- common/autotest_common.sh@941 -- # uname 00:14:02.261 10:01:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:02.261 10:01:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82217 00:14:02.261 10:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:02.261 10:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:02.261 killing process with pid 82217 00:14:02.261 10:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82217' 00:14:02.261 10:01:00 -- common/autotest_common.sh@955 -- # kill 82217 00:14:02.261 10:01:00 -- common/autotest_common.sh@960 -- # wait 82217 00:14:02.520 10:01:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:02.520 10:01:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:02.520 10:01:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:02.520 10:01:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.520 10:01:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:02.520 10:01:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.520 10:01:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.520 10:01:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.520 10:01:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:02.520 00:14:02.520 real 0m3.777s 00:14:02.520 user 0m4.498s 00:14:02.520 sys 0m1.206s 00:14:02.520 10:01:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:02.520 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:14:02.520 ************************************ 00:14:02.520 END TEST nvmf_fused_ordering 00:14:02.520 ************************************ 00:14:02.520 10:01:01 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:02.520 10:01:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:02.520 10:01:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:02.520 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:14:02.520 ************************************ 00:14:02.520 START TEST nvmf_delete_subsystem 00:14:02.520 ************************************ 00:14:02.520 10:01:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:02.520 * Looking for test storage... 
00:14:02.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.520 10:01:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:02.520 10:01:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:02.520 10:01:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:02.779 10:01:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:02.779 10:01:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:02.779 10:01:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:02.779 10:01:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:02.779 10:01:01 -- scripts/common.sh@335 -- # IFS=.-: 00:14:02.779 10:01:01 -- scripts/common.sh@335 -- # read -ra ver1 00:14:02.779 10:01:01 -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.779 10:01:01 -- scripts/common.sh@336 -- # read -ra ver2 00:14:02.779 10:01:01 -- scripts/common.sh@337 -- # local 'op=<' 00:14:02.779 10:01:01 -- scripts/common.sh@339 -- # ver1_l=2 00:14:02.779 10:01:01 -- scripts/common.sh@340 -- # ver2_l=1 00:14:02.779 10:01:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:02.779 10:01:01 -- scripts/common.sh@343 -- # case "$op" in 00:14:02.779 10:01:01 -- scripts/common.sh@344 -- # : 1 00:14:02.779 10:01:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:02.779 10:01:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.779 10:01:01 -- scripts/common.sh@364 -- # decimal 1 00:14:02.779 10:01:01 -- scripts/common.sh@352 -- # local d=1 00:14:02.779 10:01:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.779 10:01:01 -- scripts/common.sh@354 -- # echo 1 00:14:02.779 10:01:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:02.779 10:01:01 -- scripts/common.sh@365 -- # decimal 2 00:14:02.779 10:01:01 -- scripts/common.sh@352 -- # local d=2 00:14:02.779 10:01:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.779 10:01:01 -- scripts/common.sh@354 -- # echo 2 00:14:02.779 10:01:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:02.779 10:01:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:02.779 10:01:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:02.779 10:01:01 -- scripts/common.sh@367 -- # return 0 00:14:02.779 10:01:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.779 10:01:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.780 --rc genhtml_branch_coverage=1 00:14:02.780 --rc genhtml_function_coverage=1 00:14:02.780 --rc genhtml_legend=1 00:14:02.780 --rc geninfo_all_blocks=1 00:14:02.780 --rc geninfo_unexecuted_blocks=1 00:14:02.780 00:14:02.780 ' 00:14:02.780 10:01:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.780 --rc genhtml_branch_coverage=1 00:14:02.780 --rc genhtml_function_coverage=1 00:14:02.780 --rc genhtml_legend=1 00:14:02.780 --rc geninfo_all_blocks=1 00:14:02.780 --rc geninfo_unexecuted_blocks=1 00:14:02.780 00:14:02.780 ' 00:14:02.780 10:01:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.780 --rc genhtml_branch_coverage=1 00:14:02.780 --rc genhtml_function_coverage=1 00:14:02.780 --rc genhtml_legend=1 00:14:02.780 --rc geninfo_all_blocks=1 00:14:02.780 --rc geninfo_unexecuted_blocks=1 00:14:02.780 00:14:02.780 ' 00:14:02.780 
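The lt/cmp_versions calls traced above decide whether the installed lcov (1.15) predates version 2 and therefore needs the legacy --rc option names. A minimal sketch of that comparison, condensed from the xtrace rather than copied verbatim from scripts/common.sh (the decimal() sanitising step is folded into a ${...:-0} default):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    # split both version strings on '.', '-' and ':' (the IFS=.-: step in the trace)
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        # the first differing component decides; a missing component counts as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # all components equal
}

lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"   # matches the 'return 0' seen above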
10:01:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.780 --rc genhtml_branch_coverage=1 00:14:02.780 --rc genhtml_function_coverage=1 00:14:02.780 --rc genhtml_legend=1 00:14:02.780 --rc geninfo_all_blocks=1 00:14:02.780 --rc geninfo_unexecuted_blocks=1 00:14:02.780 00:14:02.780 ' 00:14:02.780 10:01:01 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.780 10:01:01 -- nvmf/common.sh@7 -- # uname -s 00:14:02.780 10:01:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.780 10:01:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.780 10:01:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.780 10:01:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.780 10:01:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.780 10:01:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.780 10:01:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.780 10:01:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.780 10:01:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.780 10:01:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:02.780 10:01:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:02.780 10:01:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.780 10:01:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.780 10:01:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.780 10:01:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.780 10:01:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.780 10:01:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.780 10:01:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.780 10:01:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.780 10:01:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.780 10:01:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.780 10:01:01 -- paths/export.sh@5 -- # export PATH 00:14:02.780 10:01:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.780 10:01:01 -- nvmf/common.sh@46 -- # : 0 00:14:02.780 10:01:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:02.780 10:01:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:02.780 10:01:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:02.780 10:01:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.780 10:01:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.780 10:01:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:02.780 10:01:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:02.780 10:01:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:02.780 10:01:01 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:02.780 10:01:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:02.780 10:01:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.780 10:01:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:02.780 10:01:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:02.780 10:01:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:02.780 10:01:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.780 10:01:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.780 10:01:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.780 10:01:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:02.780 10:01:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:02.780 10:01:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.780 10:01:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.780 10:01:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:02.780 10:01:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:02.780 10:01:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.780 10:01:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.780 10:01:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.780 10:01:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:02.780 10:01:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.780 10:01:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.780 10:01:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.780 10:01:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.780 10:01:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:02.780 10:01:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:02.780 Cannot find device "nvmf_tgt_br" 00:14:02.780 10:01:01 -- nvmf/common.sh@154 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.780 Cannot find device "nvmf_tgt_br2" 00:14:02.780 10:01:01 -- nvmf/common.sh@155 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:02.780 10:01:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:02.780 Cannot find device "nvmf_tgt_br" 00:14:02.780 10:01:01 -- nvmf/common.sh@157 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:02.780 Cannot find device "nvmf_tgt_br2" 00:14:02.780 10:01:01 -- nvmf/common.sh@158 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:02.780 10:01:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:02.780 10:01:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.780 10:01:01 -- nvmf/common.sh@161 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.780 10:01:01 -- nvmf/common.sh@162 -- # true 00:14:02.780 10:01:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.780 10:01:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.780 10:01:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.780 10:01:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.780 10:01:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.039 10:01:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.039 10:01:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:03.039 10:01:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:03.039 10:01:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:03.039 10:01:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:03.039 10:01:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:03.039 10:01:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:03.039 10:01:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:03.039 10:01:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.039 10:01:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.039 10:01:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.039 10:01:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:03.039 10:01:01 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:03.039 10:01:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:03.039 10:01:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.039 10:01:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.039 10:01:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.039 10:01:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.039 10:01:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:03.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:03.039 00:14:03.039 --- 10.0.0.2 ping statistics --- 00:14:03.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.039 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:03.039 10:01:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:03.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:03.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:03.039 00:14:03.039 --- 10.0.0.3 ping statistics --- 00:14:03.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.039 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:03.039 10:01:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:03.039 00:14:03.039 --- 10.0.0.1 ping statistics --- 00:14:03.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.039 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:03.039 10:01:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.039 10:01:01 -- nvmf/common.sh@421 -- # return 0 00:14:03.039 10:01:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:03.039 10:01:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.039 10:01:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:03.039 10:01:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:03.039 10:01:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.040 10:01:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:03.040 10:01:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:03.040 10:01:01 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:03.040 10:01:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:03.040 10:01:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.040 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 10:01:01 -- nvmf/common.sh@469 -- # nvmfpid=82461 00:14:03.040 10:01:01 -- nvmf/common.sh@470 -- # waitforlisten 82461 00:14:03.040 10:01:01 -- common/autotest_common.sh@829 -- # '[' -z 82461 ']' 00:14:03.040 10:01:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.040 10:01:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:03.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.040 10:01:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.040 10:01:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
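The nvmf_veth_init block traced above builds the virtual test network used for the rest of this run. Condensed to the essential commands (interface names and addresses are exactly the ones shown in the trace; this is a sketch, not a verbatim copy of nvmf/common.sh, and it needs root):

ip netns add nvmf_tgt_ns_spdk                                   # the target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target side ...
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # ... plus a second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link add nvmf_br type bridge                                 # bridge joins the root-namespace peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
ping -c 1 10.0.0.2                                               # reachability checks, as in the trace

After each link is brought up and the pings confirm 10.0.0.1/2/3 are reachable, nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3), which is the process the waitforlisten step here is polling for.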
00:14:03.040 10:01:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.040 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 [2024-12-16 10:01:01.627450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:03.040 [2024-12-16 10:01:01.628050] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.298 [2024-12-16 10:01:01.771085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.298 [2024-12-16 10:01:01.833821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:03.298 [2024-12-16 10:01:01.833999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.298 [2024-12-16 10:01:01.834014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.298 [2024-12-16 10:01:01.834025] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.298 [2024-12-16 10:01:01.834156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.298 [2024-12-16 10:01:01.834564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.234 10:01:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.234 10:01:02 -- common/autotest_common.sh@862 -- # return 0 00:14:04.234 10:01:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:04.234 10:01:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 10:01:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 [2024-12-16 10:01:02.606797] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 [2024-12-16 10:01:02.623039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 NULL1 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 Delay0 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.234 10:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.234 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:04.234 10:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@28 -- # perf_pid=82512 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:04.234 10:01:02 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:04.234 [2024-12-16 10:01:02.817440] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:06.136 10:01:04 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.136 10:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.136 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Write completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 starting I/O failed: -6 00:14:06.395 Read completed with error (sct=0, sc=8) 00:14:06.395 Read completed with 
error (sct=0, sc=8) 00:14:06.395
[several hundred similar entries omitted: while nqn.2016-06.io.spdk:cnode1 is deleted, queued requests complete with "Read/Write completed with error (sct=0, sc=8)" and new submissions report "starting I/O failed: -6" (elapsed timestamps 00:14:06.395 through 00:14:06.397); interleaved nvme_tcp.c: 322 and tcp.c:1576 nvmf_tcp_qpair_set_recv_state *ERROR* messages at 10:01:04.851 through 10:01:04.854 report that the recv state of tqpairs 0x2317e70, 0x1229a90, 0x1229130, 0x7f152c000c00, 0x7f152c00c350 and 0x7f152c00bf20 "is same with the state(5) to be set"]
00:14:07.336 [2024-12-16 10:01:05.831016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2316070
is same with the state(5) to be set
00:14:07.336 (Read/Write completed with error (sct=0, sc=8) is reported for each command still queued on the controller)
00:14:07.336 [2024-12-16 10:01:05.853770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2317bc0 is same with the state(5) to be set
00:14:07.336 (more Read/Write completed with error (sct=0, sc=8) entries follow)
00:14:07.336 [2024-12-16 10:01:05.853941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318120 is same with the state(5) to be set
00:14:07.336 [2024-12-16 10:01:05.854501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2316070 (9): Bad file descriptor
00:14:07.336 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:07.336 10:01:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.336 10:01:05 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:07.336 10:01:05 -- target/delete_subsystem.sh@35 -- # kill -0 82512 00:14:07.336 10:01:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:07.336 Initializing NVMe Controllers 00:14:07.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:07.336 Controller IO queue size 128, less than required. 00:14:07.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:07.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:07.336 Initialization complete. Launching workers. 00:14:07.336 ======================================================== 00:14:07.336 Latency(us) 00:14:07.336 Device Information : IOPS MiB/s Average min max 00:14:07.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.43 0.08 898788.41 303.65 1010635.14 00:14:07.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.49 0.08 803153.62 285.64 1012773.27 00:14:07.336 ======================================================== 00:14:07.336 Total : 327.93 0.16 852275.13 285.64 1012773.27 00:14:07.336 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@35 -- # kill -0 82512 00:14:07.902 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82512) - No such process 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@45 -- # NOT wait 82512 00:14:07.902 10:01:06 -- common/autotest_common.sh@650 -- # local es=0 00:14:07.902 10:01:06 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82512 00:14:07.902 10:01:06 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:07.902 10:01:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.902 10:01:06 -- common/autotest_common.sh@642 -- # type -t wait 00:14:07.902 10:01:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.902 10:01:06 -- common/autotest_common.sh@653 -- # wait 82512 00:14:07.902 10:01:06 -- common/autotest_common.sh@653 -- # es=1 00:14:07.902 10:01:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.902 10:01:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.902 10:01:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.902 10:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.902 10:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:07.902 10:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.902 10:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.902 10:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:07.902 [2024-12-16 10:01:06.379323] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.902 10:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.902 10:01:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.902 10:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:07.902 10:01:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@54 -- # perf_pid=82558 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:07.902 10:01:06 -- 
target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:07.902 10:01:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.160 [2024-12-16 10:01:06.559001] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:08.418 10:01:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.418 10:01:06 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:08.418 10:01:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.984 10:01:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.984 10:01:07 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:08.984 10:01:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:09.550 10:01:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:09.550 10:01:07 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:09.550 10:01:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:09.808 10:01:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:09.808 10:01:08 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:09.808 10:01:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.374 10:01:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.374 10:01:08 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:10.374 10:01:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.940 10:01:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.940 10:01:09 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:10.940 10:01:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:11.198 Initializing NVMe Controllers 00:14:11.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.198 Controller IO queue size 128, less than required. 00:14:11.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:11.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:11.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:11.198 Initialization complete. Launching workers. 
00:14:11.198 ======================================================== 00:14:11.198 Latency(us) 00:14:11.198 Device Information : IOPS MiB/s Average min max 00:14:11.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002640.24 1000117.31 1043958.29 00:14:11.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004482.27 1000184.24 1010896.82 00:14:11.198 ======================================================== 00:14:11.198 Total : 256.00 0.12 1003561.26 1000117.31 1043958.29 00:14:11.198 00:14:11.455 10:01:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:11.455 10:01:09 -- target/delete_subsystem.sh@57 -- # kill -0 82558 00:14:11.455 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82558) - No such process 00:14:11.455 10:01:09 -- target/delete_subsystem.sh@67 -- # wait 82558 00:14:11.455 10:01:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:11.455 10:01:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:11.455 10:01:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.455 10:01:09 -- nvmf/common.sh@116 -- # sync 00:14:11.455 10:01:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.455 10:01:09 -- nvmf/common.sh@119 -- # set +e 00:14:11.455 10:01:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.455 10:01:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.455 rmmod nvme_tcp 00:14:11.455 rmmod nvme_fabrics 00:14:11.455 rmmod nvme_keyring 00:14:11.455 10:01:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.455 10:01:10 -- nvmf/common.sh@123 -- # set -e 00:14:11.455 10:01:10 -- nvmf/common.sh@124 -- # return 0 00:14:11.455 10:01:10 -- nvmf/common.sh@477 -- # '[' -n 82461 ']' 00:14:11.455 10:01:10 -- nvmf/common.sh@478 -- # killprocess 82461 00:14:11.455 10:01:10 -- common/autotest_common.sh@936 -- # '[' -z 82461 ']' 00:14:11.455 10:01:10 -- common/autotest_common.sh@940 -- # kill -0 82461 00:14:11.455 10:01:10 -- common/autotest_common.sh@941 -- # uname 00:14:11.456 10:01:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.456 10:01:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82461 00:14:11.713 10:01:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:11.713 killing process with pid 82461 00:14:11.713 10:01:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:11.713 10:01:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82461' 00:14:11.713 10:01:10 -- common/autotest_common.sh@955 -- # kill 82461 00:14:11.713 10:01:10 -- common/autotest_common.sh@960 -- # wait 82461 00:14:11.714 10:01:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.714 10:01:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.714 10:01:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.714 10:01:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.714 10:01:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.714 10:01:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.714 10:01:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.714 10:01:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.714 10:01:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:11.714 ************************************ 00:14:11.714 END TEST nvmf_delete_subsystem 00:14:11.714 ************************************ 
00:14:11.714 00:14:11.714 real 0m9.279s 00:14:11.714 user 0m27.632s 00:14:11.714 sys 0m1.533s 00:14:11.714 10:01:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:11.714 10:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.972 10:01:10 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:11.972 10:01:10 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:11.972 10:01:10 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.972 10:01:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.972 10:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:11.972 ************************************ 00:14:11.972 START TEST nvmf_host_management 00:14:11.972 ************************************ 00:14:11.972 10:01:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.972 * Looking for test storage... 00:14:11.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.972 10:01:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:11.972 10:01:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:11.972 10:01:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:11.972 10:01:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:11.972 10:01:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:11.972 10:01:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:11.972 10:01:10 -- scripts/common.sh@335 -- # IFS=.-: 00:14:11.972 10:01:10 -- scripts/common.sh@335 -- # read -ra ver1 00:14:11.972 10:01:10 -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.972 10:01:10 -- scripts/common.sh@336 -- # read -ra ver2 00:14:11.972 10:01:10 -- scripts/common.sh@337 -- # local 'op=<' 00:14:11.972 10:01:10 -- scripts/common.sh@339 -- # ver1_l=2 00:14:11.972 10:01:10 -- scripts/common.sh@340 -- # ver2_l=1 00:14:11.972 10:01:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:11.972 10:01:10 -- scripts/common.sh@343 -- # case "$op" in 00:14:11.972 10:01:10 -- scripts/common.sh@344 -- # : 1 00:14:11.972 10:01:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:11.972 10:01:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.972 10:01:10 -- scripts/common.sh@364 -- # decimal 1 00:14:11.972 10:01:10 -- scripts/common.sh@352 -- # local d=1 00:14:11.972 10:01:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.972 10:01:10 -- scripts/common.sh@354 -- # echo 1 00:14:11.972 10:01:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:11.972 10:01:10 -- scripts/common.sh@365 -- # decimal 2 00:14:11.972 10:01:10 -- scripts/common.sh@352 -- # local d=2 00:14:11.972 10:01:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.972 10:01:10 -- scripts/common.sh@354 -- # echo 2 00:14:11.972 10:01:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:11.972 10:01:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:11.972 10:01:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:11.972 10:01:10 -- scripts/common.sh@367 -- # return 0 00:14:11.972 10:01:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:11.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.972 --rc genhtml_branch_coverage=1 00:14:11.972 --rc genhtml_function_coverage=1 00:14:11.972 --rc genhtml_legend=1 00:14:11.972 --rc geninfo_all_blocks=1 00:14:11.972 --rc geninfo_unexecuted_blocks=1 00:14:11.972 00:14:11.972 ' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:11.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.972 --rc genhtml_branch_coverage=1 00:14:11.972 --rc genhtml_function_coverage=1 00:14:11.972 --rc genhtml_legend=1 00:14:11.972 --rc geninfo_all_blocks=1 00:14:11.972 --rc geninfo_unexecuted_blocks=1 00:14:11.972 00:14:11.972 ' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:11.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.972 --rc genhtml_branch_coverage=1 00:14:11.972 --rc genhtml_function_coverage=1 00:14:11.972 --rc genhtml_legend=1 00:14:11.972 --rc geninfo_all_blocks=1 00:14:11.972 --rc geninfo_unexecuted_blocks=1 00:14:11.972 00:14:11.972 ' 00:14:11.972 10:01:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:11.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.972 --rc genhtml_branch_coverage=1 00:14:11.972 --rc genhtml_function_coverage=1 00:14:11.972 --rc genhtml_legend=1 00:14:11.972 --rc geninfo_all_blocks=1 00:14:11.973 --rc geninfo_unexecuted_blocks=1 00:14:11.973 00:14:11.973 ' 00:14:11.973 10:01:10 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.973 10:01:10 -- nvmf/common.sh@7 -- # uname -s 00:14:11.973 10:01:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.973 10:01:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.973 10:01:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.973 10:01:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.973 10:01:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.973 10:01:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.973 10:01:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.973 10:01:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.973 10:01:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.973 10:01:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
00:14:11.973 10:01:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:11.973 10:01:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.973 10:01:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.973 10:01:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.973 10:01:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.973 10:01:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.973 10:01:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.973 10:01:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.973 10:01:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.973 10:01:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.973 10:01:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.973 10:01:10 -- paths/export.sh@5 -- # export PATH 00:14:11.973 10:01:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.973 10:01:10 -- nvmf/common.sh@46 -- # : 0 00:14:11.973 10:01:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.973 10:01:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.973 10:01:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.973 10:01:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.973 10:01:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.973 10:01:10 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:11.973 10:01:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.973 10:01:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.973 10:01:10 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.973 10:01:10 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.973 10:01:10 -- target/host_management.sh@104 -- # nvmftestinit 00:14:11.973 10:01:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.973 10:01:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.973 10:01:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.973 10:01:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.973 10:01:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.973 10:01:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.973 10:01:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.973 10:01:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.973 10:01:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:11.973 10:01:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:11.973 10:01:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.973 10:01:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.973 10:01:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:11.973 10:01:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:11.973 10:01:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.973 10:01:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.973 10:01:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.973 10:01:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.973 10:01:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.973 10:01:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.973 10:01:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.973 10:01:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.973 10:01:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:11.973 10:01:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:11.973 Cannot find device "nvmf_tgt_br" 00:14:11.973 10:01:10 -- nvmf/common.sh@154 -- # true 00:14:11.973 10:01:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.231 Cannot find device "nvmf_tgt_br2" 00:14:12.231 10:01:10 -- nvmf/common.sh@155 -- # true 00:14:12.231 10:01:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:12.231 10:01:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:12.231 Cannot find device "nvmf_tgt_br" 00:14:12.231 10:01:10 -- nvmf/common.sh@157 -- # true 00:14:12.231 10:01:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:12.231 Cannot find device "nvmf_tgt_br2" 00:14:12.231 10:01:10 -- nvmf/common.sh@158 -- # true 00:14:12.231 10:01:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:12.231 10:01:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:12.231 10:01:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:12.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.231 10:01:10 -- nvmf/common.sh@161 -- # true 00:14:12.232 10:01:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.232 10:01:10 -- nvmf/common.sh@162 -- # true 00:14:12.232 10:01:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.232 10:01:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.232 10:01:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.232 10:01:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.232 10:01:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.232 10:01:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.232 10:01:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.232 10:01:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.232 10:01:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.232 10:01:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:12.232 10:01:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:12.232 10:01:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:12.232 10:01:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:12.232 10:01:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.232 10:01:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.232 10:01:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.232 10:01:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:12.232 10:01:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:12.232 10:01:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.232 10:01:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.232 10:01:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.232 10:01:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.490 10:01:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.490 10:01:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:12.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:12.490 00:14:12.490 --- 10.0.0.2 ping statistics --- 00:14:12.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.490 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:12.490 10:01:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:12.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:12.490 00:14:12.490 --- 10.0.0.3 ping statistics --- 00:14:12.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.490 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:12.490 10:01:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:12.490 00:14:12.490 --- 10.0.0.1 ping statistics --- 00:14:12.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.490 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:12.490 10:01:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.490 10:01:10 -- nvmf/common.sh@421 -- # return 0 00:14:12.490 10:01:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:12.490 10:01:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.490 10:01:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:12.490 10:01:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:12.490 10:01:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.490 10:01:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:12.490 10:01:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:12.490 10:01:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:12.490 10:01:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.490 10:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.490 10:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.490 ************************************ 00:14:12.490 START TEST nvmf_host_management 00:14:12.490 ************************************ 00:14:12.490 10:01:10 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:12.490 10:01:10 -- target/host_management.sh@69 -- # starttarget 00:14:12.490 10:01:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:12.490 10:01:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:12.490 10:01:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.490 10:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.490 10:01:10 -- nvmf/common.sh@469 -- # nvmfpid=82796 00:14:12.490 10:01:10 -- nvmf/common.sh@470 -- # waitforlisten 82796 00:14:12.490 10:01:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:12.490 10:01:10 -- common/autotest_common.sh@829 -- # '[' -z 82796 ']' 00:14:12.490 10:01:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.490 10:01:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.490 10:01:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.490 10:01:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.490 10:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:12.490 [2024-12-16 10:01:10.966792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:12.490 [2024-12-16 10:01:10.966908] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.490 [2024-12-16 10:01:11.107288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.749 [2024-12-16 10:01:11.162993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.749 [2024-12-16 10:01:11.163300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:12.749 [2024-12-16 10:01:11.163435] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.749 [2024-12-16 10:01:11.163540] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.749 [2024-12-16 10:01:11.163770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.749 [2024-12-16 10:01:11.164261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.749 [2024-12-16 10:01:11.164408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.749 [2024-12-16 10:01:11.164406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.314 10:01:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.314 10:01:11 -- common/autotest_common.sh@862 -- # return 0 00:14:13.314 10:01:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:13.314 10:01:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.314 10:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.314 10:01:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.314 10:01:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.314 10:01:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.314 10:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.314 [2024-12-16 10:01:11.887812] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.314 10:01:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.314 10:01:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:13.314 10:01:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.314 10:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.314 10:01:11 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:13.314 10:01:11 -- target/host_management.sh@23 -- # cat 00:14:13.314 10:01:11 -- target/host_management.sh@30 -- # rpc_cmd 00:14:13.314 10:01:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.314 10:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.573 Malloc0 00:14:13.573 [2024-12-16 10:01:11.971281] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.573 10:01:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.573 10:01:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:13.573 10:01:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.573 10:01:11 -- common/autotest_common.sh@10 -- # set +x 00:14:13.573 10:01:12 -- target/host_management.sh@73 -- # perfpid=82868 00:14:13.573 10:01:12 -- target/host_management.sh@74 -- # waitforlisten 82868 /var/tmp/bdevperf.sock 00:14:13.573 10:01:12 -- common/autotest_common.sh@829 -- # '[' -z 82868 ']' 00:14:13.573 10:01:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.573 10:01:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.573 10:01:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
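Note: the waitforlisten step above waits until bdevperf has created its RPC socket at /var/tmp/bdevperf.sock and is answering RPCs. A minimal bash sketch of such a polling helper follows, for illustration only; the real helper lives in test/common/autotest_common.sh and differs, and the name wait_for_rpc_socket is an assumption.

  # Illustrative sketch only; not the actual waitforlisten from autotest_common.sh.
  wait_for_rpc_socket() {
      local pid=$1 rpc_sock=${2:-/var/tmp/bdevperf.sock} retries=100
      while (( retries-- > 0 )); do
          # Give up if the target process exited before it ever listened.
          kill -0 "$pid" 2>/dev/null || return 1
          # Succeed once the UNIX socket exists and answers a basic RPC.
          if [[ -S "$rpc_sock" ]] && \
             /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1
  }
  # Example: wait_for_rpc_socket "$perfpid" /var/tmp/bdevperf.sock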
00:14:13.573 10:01:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.573 10:01:12 -- common/autotest_common.sh@10 -- # set +x 00:14:13.573 10:01:12 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:13.573 10:01:12 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:13.573 10:01:12 -- nvmf/common.sh@520 -- # config=() 00:14:13.573 10:01:12 -- nvmf/common.sh@520 -- # local subsystem config 00:14:13.573 10:01:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:13.573 10:01:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:13.573 { 00:14:13.573 "params": { 00:14:13.573 "name": "Nvme$subsystem", 00:14:13.573 "trtype": "$TEST_TRANSPORT", 00:14:13.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.573 "adrfam": "ipv4", 00:14:13.573 "trsvcid": "$NVMF_PORT", 00:14:13.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.573 "hdgst": ${hdgst:-false}, 00:14:13.573 "ddgst": ${ddgst:-false} 00:14:13.573 }, 00:14:13.573 "method": "bdev_nvme_attach_controller" 00:14:13.573 } 00:14:13.573 EOF 00:14:13.573 )") 00:14:13.573 10:01:12 -- nvmf/common.sh@542 -- # cat 00:14:13.573 10:01:12 -- nvmf/common.sh@544 -- # jq . 00:14:13.573 10:01:12 -- nvmf/common.sh@545 -- # IFS=, 00:14:13.573 10:01:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:13.573 "params": { 00:14:13.573 "name": "Nvme0", 00:14:13.573 "trtype": "tcp", 00:14:13.573 "traddr": "10.0.0.2", 00:14:13.573 "adrfam": "ipv4", 00:14:13.573 "trsvcid": "4420", 00:14:13.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.573 "hdgst": false, 00:14:13.573 "ddgst": false 00:14:13.573 }, 00:14:13.573 "method": "bdev_nvme_attach_controller" 00:14:13.573 }' 00:14:13.573 [2024-12-16 10:01:12.070452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.573 [2024-12-16 10:01:12.070514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82868 ] 00:14:13.832 [2024-12-16 10:01:12.202285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.832 [2024-12-16 10:01:12.265272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.832 Running I/O for 10 seconds... 
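Note: bdevperf above is not given a config file on disk; gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown in the trace, and it reaches bdevperf as /dev/fd/63. A self-contained sketch of an equivalent invocation follows; the surrounding "subsystems"/"bdev"/"config" wrapper is an assumption about what gen_nvmf_target_json emits, while the params object and the command-line flags are taken from the trace.

  # Sketch: feed the attach-controller JSON to bdevperf via process substitution,
  # mirroring the '--json /dev/fd/63' invocation traced above.
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$BDEVPERF" -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )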
00:14:14.768 10:01:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.768 10:01:13 -- common/autotest_common.sh@862 -- # return 0 00:14:14.768 10:01:13 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:14.768 10:01:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.768 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:14:14.768 10:01:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.768 10:01:13 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.768 10:01:13 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:14.768 10:01:13 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:14.768 10:01:13 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:14.768 10:01:13 -- target/host_management.sh@52 -- # local ret=1 00:14:14.768 10:01:13 -- target/host_management.sh@53 -- # local i 00:14:14.768 10:01:13 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:14.768 10:01:13 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:14.768 10:01:13 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:14.768 10:01:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.768 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:14:14.768 10:01:13 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:14.768 10:01:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.768 10:01:13 -- target/host_management.sh@55 -- # read_io_count=2510 00:14:14.768 10:01:13 -- target/host_management.sh@58 -- # '[' 2510 -ge 100 ']' 00:14:14.768 10:01:13 -- target/host_management.sh@59 -- # ret=0 00:14:14.768 10:01:13 -- target/host_management.sh@60 -- # break 00:14:14.768 10:01:13 -- target/host_management.sh@64 -- # return 0 00:14:14.768 10:01:13 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.768 10:01:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.768 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:14:14.768 [2024-12-16 10:01:13.167442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.768 [2024-12-16 10:01:13.167481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.768 [2024-12-16 10:01:13.167495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.768 [2024-12-16 10:01:13.167505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.768 [2024-12-16 10:01:13.167515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.768 [2024-12-16 10:01:13.167524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.768 [2024-12-16 10:01:13.167534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.768 [2024-12-16 10:01:13.167543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:14.768 [2024-12-16 10:01:13.167553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109a70 is same with the state(5) to be set
00:14:14.768 (every command still outstanding on the I/O qpair is then printed by nvme_io_qpair_print_command, WRITE and READ entries on sqid:1 with various cid/lba values, and each is completed with ABORTED - SQ DELETION (00/08); these paired entries repeat from 10:01:13.168127 through 10:01:13.169152)
00:14:14.769 10:01:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.769 [2024-12-16 10:01:13.169163] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 10:01:13 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.769 [2024-12-16 10:01:13.169236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.769 [2024-12-16 10:01:13.169335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.769 [2024-12-16 10:01:13.169344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 10:01:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.770 [2024-12-16 10:01:13.169480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 10:01:13 -- common/autotest_common.sh@10 -- # set +x 00:14:14.770 [2024-12-16 10:01:13.169658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.770 [2024-12-16 10:01:13.169698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.770 [2024-12-16 10:01:13.169801] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21addc0 was disconnected and freed. reset controller. 
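Each command/completion pair above is one outstanding request that the initiator completes locally with ABORTED - SQ DELETION (SCT/SC 00/08) once the target side is killed, so the flood itself is expected for this negative test. If needed, the aborted requests can be counted from a saved copy of the console output (the log file name below is only illustrative):

  grep -c 'ABORTED - SQ DELETION' nvmf-tcp-vg-autotest.log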
00:14:14.770 [2024-12-16 10:01:13.171805] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:14.770 task offset: 84480 on job bdev=Nvme0n1 fails
00:14:14.770
00:14:14.770 Latency(us)
00:14:14.770 [2024-12-16T10:01:13.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:14.770 [2024-12-16T10:01:13.395Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:14.770 [2024-12-16T10:01:13.395Z] Job: Nvme0n1 ended in about 0.74 seconds with error
00:14:14.770 Verification LBA range: start 0x0 length 0x400
00:14:14.770 Nvme0n1 : 0.74 3656.33 228.52 86.80 0.00 16825.06 2815.07 22401.40
00:14:14.770 [2024-12-16T10:01:13.395Z] ===================================================================================================================
00:14:14.770 [2024-12-16T10:01:13.395Z] Total : 3656.33 228.52 86.80 0.00 16825.06 2815.07 22401.40
00:14:14.770 [2024-12-16 10:01:13.173731] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:14.770 [2024-12-16 10:01:13.173754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109a70 (9): Bad file descriptor
00:14:14.770 10:01:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.770 10:01:13 -- target/host_management.sh@87 -- # sleep 1
00:14:15.705 [2024-12-16 10:01:13.183670] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:15.705 10:01:14 -- target/host_management.sh@91 -- # kill -9 82868
00:14:15.705 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82868) - No such process
00:14:15.705 10:01:14 -- target/host_management.sh@91 -- # true
00:14:15.705 10:01:14 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:14:15.705 10:01:14 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:14:15.705 10:01:14 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:14:15.705 10:01:14 -- nvmf/common.sh@520 -- # config=()
00:14:15.705 10:01:14 -- nvmf/common.sh@520 -- # local subsystem config
00:14:15.705 10:01:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:14:15.705 10:01:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:14:15.705 {
00:14:15.705 "params": {
00:14:15.705 "name": "Nvme$subsystem",
00:14:15.705 "trtype": "$TEST_TRANSPORT",
00:14:15.705 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:15.705 "adrfam": "ipv4",
00:14:15.705 "trsvcid": "$NVMF_PORT",
00:14:15.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:15.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:15.705 "hdgst": ${hdgst:-false},
00:14:15.705 "ddgst": ${ddgst:-false}
00:14:15.705 },
00:14:15.705 "method": "bdev_nvme_attach_controller"
00:14:15.705 }
00:14:15.705 EOF
00:14:15.705 )")
00:14:15.705 10:01:14 -- nvmf/common.sh@542 -- # cat
00:14:15.705 10:01:14 -- nvmf/common.sh@544 -- # jq .
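As an aside for readers following the trace: the bdevperf relaunch above takes its bdev configuration from file descriptor 62 rather than from a config file on disk. A minimal sketch of the equivalent invocation is shown below; the process-substitution form is an assumption inferred from the /dev/fd/62 path and the gen_nvmf_target_json trace, and the paths simply match the ones used in this run.

  # Sketch only: hand the generated NVMe-oF attach config to bdevperf without a temp file.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1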
00:14:15.705 10:01:14 -- nvmf/common.sh@545 -- # IFS=,
00:14:15.705 10:01:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:14:15.705 "params": {
00:14:15.705 "name": "Nvme0",
00:14:15.705 "trtype": "tcp",
00:14:15.705 "traddr": "10.0.0.2",
00:14:15.705 "adrfam": "ipv4",
00:14:15.705 "trsvcid": "4420",
00:14:15.705 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:15.705 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:15.705 "hdgst": false,
00:14:15.705 "ddgst": false
00:14:15.705 },
00:14:15.705 "method": "bdev_nvme_attach_controller"
00:14:15.705 }'
00:14:15.705 [2024-12-16 10:01:14.241420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:15.705 [2024-12-16 10:01:14.241518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82918 ]
00:14:15.963 [2024-12-16 10:01:14.381626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:16.221 [2024-12-16 10:01:14.434357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:16.221 Running I/O for 1 seconds...
00:14:17.154
00:14:17.154 Latency(us)
00:14:17.154 [2024-12-16T10:01:15.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.154 [2024-12-16T10:01:15.779Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:17.154 Verification LBA range: start 0x0 length 0x400
00:14:17.154 Nvme0n1 : 1.01 3828.36 239.27 0.00 0.00 16422.34 1608.61 22043.93
00:14:17.154 [2024-12-16T10:01:15.779Z] ===================================================================================================================
00:14:17.154 [2024-12-16T10:01:15.779Z] Total : 3828.36 239.27 0.00 0.00 16422.34 1608.61 22043.93
00:14:17.412 10:01:15 -- target/host_management.sh@101 -- # stoptarget
00:14:17.412 10:01:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:14:17.412 10:01:15 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:14:17.412 10:01:15 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:14:17.412 10:01:15 -- target/host_management.sh@40 -- # nvmftestfini
00:14:17.412 10:01:15 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:17.412 10:01:15 -- nvmf/common.sh@116 -- # sync
00:14:17.412 10:01:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:17.412 10:01:15 -- nvmf/common.sh@119 -- # set +e
00:14:17.412 10:01:15 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:17.412 10:01:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:17.412 rmmod nvme_tcp
00:14:17.412 rmmod nvme_fabrics
00:14:17.412 rmmod nvme_keyring
00:14:17.412 10:01:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:17.412 10:01:15 -- nvmf/common.sh@123 -- # set -e
00:14:17.412 10:01:15 -- nvmf/common.sh@124 -- # return 0
00:14:17.412 10:01:15 -- nvmf/common.sh@477 -- # '[' -n 82796 ']'
00:14:17.412 10:01:15 -- nvmf/common.sh@478 -- # killprocess 82796
00:14:17.412 10:01:15 -- common/autotest_common.sh@936 -- # '[' -z 82796 ']'
00:14:17.412 10:01:15 -- common/autotest_common.sh@940 -- # kill -0 82796
00:14:17.412 10:01:15 -- common/autotest_common.sh@941 -- # uname
00:14:17.412 10:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:17.412 10:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82796 00:14:17.412
killing process with pid 82796 00:14:17.412 10:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:17.412 10:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:17.412 10:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82796' 00:14:17.412 10:01:15 -- common/autotest_common.sh@955 -- # kill 82796 00:14:17.412 10:01:15 -- common/autotest_common.sh@960 -- # wait 82796 00:14:17.670 [2024-12-16 10:01:16.145997] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:17.670 10:01:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:17.670 10:01:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:17.670 10:01:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:17.670 10:01:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.670 10:01:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:17.670 10:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.670 10:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.670 10:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.670 10:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:17.670 00:14:17.670 real 0m5.298s 00:14:17.670 user 0m22.239s 00:14:17.670 sys 0m1.301s 00:14:17.670 ************************************ 00:14:17.670 END TEST nvmf_host_management 00:14:17.670 ************************************ 00:14:17.670 10:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:17.670 10:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:17.670 10:01:16 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:17.670 00:14:17.670 real 0m5.886s 00:14:17.670 user 0m22.407s 00:14:17.670 sys 0m1.570s 00:14:17.670 ************************************ 00:14:17.670 END TEST nvmf_host_management 00:14:17.670 ************************************ 00:14:17.670 10:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:17.670 10:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:17.670 10:01:16 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.670 10:01:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.670 10:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.671 10:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:17.671 ************************************ 00:14:17.671 START TEST nvmf_lvol 00:14:17.671 ************************************ 00:14:17.671 10:01:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.929 * Looking for test storage... 
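The killprocess trace above boils down to roughly the helper sketched below. This is a simplified reading of the traced steps, not the exact function from test/common/autotest_common.sh (the real helper also special-cases processes started through sudo):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0                   # nothing to do if it already exited
      ps --no-headers -o comm= "$pid"              # e.g. reactor_1 / reactor_0 in these traces
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }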
00:14:17.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.929 10:01:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:17.929 10:01:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:17.929 10:01:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:17.929 10:01:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:17.929 10:01:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:17.929 10:01:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:17.929 10:01:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:17.929 10:01:16 -- scripts/common.sh@335 -- # IFS=.-: 00:14:17.929 10:01:16 -- scripts/common.sh@335 -- # read -ra ver1 00:14:17.929 10:01:16 -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.929 10:01:16 -- scripts/common.sh@336 -- # read -ra ver2 00:14:17.929 10:01:16 -- scripts/common.sh@337 -- # local 'op=<' 00:14:17.929 10:01:16 -- scripts/common.sh@339 -- # ver1_l=2 00:14:17.929 10:01:16 -- scripts/common.sh@340 -- # ver2_l=1 00:14:17.929 10:01:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:17.929 10:01:16 -- scripts/common.sh@343 -- # case "$op" in 00:14:17.929 10:01:16 -- scripts/common.sh@344 -- # : 1 00:14:17.929 10:01:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:17.929 10:01:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.929 10:01:16 -- scripts/common.sh@364 -- # decimal 1 00:14:17.929 10:01:16 -- scripts/common.sh@352 -- # local d=1 00:14:17.929 10:01:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.929 10:01:16 -- scripts/common.sh@354 -- # echo 1 00:14:17.929 10:01:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:17.929 10:01:16 -- scripts/common.sh@365 -- # decimal 2 00:14:17.929 10:01:16 -- scripts/common.sh@352 -- # local d=2 00:14:17.929 10:01:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.929 10:01:16 -- scripts/common.sh@354 -- # echo 2 00:14:17.929 10:01:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:17.929 10:01:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:17.929 10:01:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:17.929 10:01:16 -- scripts/common.sh@367 -- # return 0 00:14:17.929 10:01:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.929 10:01:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.929 --rc genhtml_branch_coverage=1 00:14:17.929 --rc genhtml_function_coverage=1 00:14:17.929 --rc genhtml_legend=1 00:14:17.929 --rc geninfo_all_blocks=1 00:14:17.929 --rc geninfo_unexecuted_blocks=1 00:14:17.929 00:14:17.929 ' 00:14:17.929 10:01:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.929 --rc genhtml_branch_coverage=1 00:14:17.929 --rc genhtml_function_coverage=1 00:14:17.929 --rc genhtml_legend=1 00:14:17.929 --rc geninfo_all_blocks=1 00:14:17.929 --rc geninfo_unexecuted_blocks=1 00:14:17.929 00:14:17.929 ' 00:14:17.929 10:01:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:17.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.929 --rc genhtml_branch_coverage=1 00:14:17.929 --rc genhtml_function_coverage=1 00:14:17.930 --rc genhtml_legend=1 00:14:17.930 --rc geninfo_all_blocks=1 00:14:17.930 --rc geninfo_unexecuted_blocks=1 00:14:17.930 00:14:17.930 ' 00:14:17.930 
10:01:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:17.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.930 --rc genhtml_branch_coverage=1 00:14:17.930 --rc genhtml_function_coverage=1 00:14:17.930 --rc genhtml_legend=1 00:14:17.930 --rc geninfo_all_blocks=1 00:14:17.930 --rc geninfo_unexecuted_blocks=1 00:14:17.930 00:14:17.930 ' 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.930 10:01:16 -- nvmf/common.sh@7 -- # uname -s 00:14:17.930 10:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.930 10:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.930 10:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.930 10:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.930 10:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.930 10:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.930 10:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.930 10:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.930 10:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.930 10:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:17.930 10:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:17.930 10:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.930 10:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.930 10:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.930 10:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.930 10:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.930 10:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.930 10:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.930 10:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.930 10:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.930 10:01:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.930 10:01:16 -- paths/export.sh@5 -- # export PATH 00:14:17.930 10:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.930 10:01:16 -- nvmf/common.sh@46 -- # : 0 00:14:17.930 10:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.930 10:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.930 10:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.930 10:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.930 10:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.930 10:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:17.930 10:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.930 10:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.930 10:01:16 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:17.930 10:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.930 10:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.930 10:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.930 10:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.930 10:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.930 10:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.930 10:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.930 10:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.930 10:01:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:17.930 10:01:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:17.930 10:01:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.930 10:01:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.930 10:01:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.930 10:01:16 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:17.930 10:01:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.930 10:01:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.930 10:01:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.930 10:01:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.930 10:01:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.930 10:01:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.930 10:01:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.930 10:01:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.930 10:01:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:17.930 10:01:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:17.930 Cannot find device "nvmf_tgt_br" 00:14:17.930 10:01:16 -- nvmf/common.sh@154 -- # true 00:14:17.930 10:01:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.930 Cannot find device "nvmf_tgt_br2" 00:14:17.930 10:01:16 -- nvmf/common.sh@155 -- # true 00:14:17.930 10:01:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:17.930 10:01:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:17.930 Cannot find device "nvmf_tgt_br" 00:14:17.930 10:01:16 -- nvmf/common.sh@157 -- # true 00:14:17.930 10:01:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:18.188 Cannot find device "nvmf_tgt_br2" 00:14:18.188 10:01:16 -- nvmf/common.sh@158 -- # true 00:14:18.188 10:01:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:18.188 10:01:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:18.188 10:01:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.188 10:01:16 -- nvmf/common.sh@161 -- # true 00:14:18.188 10:01:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.188 10:01:16 -- nvmf/common.sh@162 -- # true 00:14:18.188 10:01:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.188 10:01:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.188 10:01:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.188 10:01:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.189 10:01:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.189 10:01:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.189 10:01:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.189 10:01:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:18.189 10:01:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:18.189 10:01:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:18.189 10:01:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:18.189 10:01:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:18.189 10:01:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:18.189 10:01:16 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.189 10:01:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.189 10:01:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.189 10:01:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:18.189 10:01:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:18.189 10:01:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.189 10:01:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.189 10:01:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.189 10:01:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.189 10:01:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.189 10:01:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:18.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:18.189 00:14:18.189 --- 10.0.0.2 ping statistics --- 00:14:18.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.189 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:18.189 10:01:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:18.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:18.189 00:14:18.189 --- 10.0.0.3 ping statistics --- 00:14:18.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.189 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:18.189 10:01:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:14:18.189 00:14:18.189 --- 10.0.0.1 ping statistics --- 00:14:18.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.189 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:18.189 10:01:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.189 10:01:16 -- nvmf/common.sh@421 -- # return 0 00:14:18.189 10:01:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:18.189 10:01:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.189 10:01:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:18.189 10:01:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:18.189 10:01:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.189 10:01:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:18.189 10:01:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:18.447 10:01:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:18.447 10:01:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:18.447 10:01:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.447 10:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
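Condensed for reference, the nvmf_veth_init sequence traced above builds the following topology, with the same interface names and addresses as in this run. This is a readability sketch of the commands, not a replacement for nvmf/common.sh:

  # One veth end stays in the root namespace (initiator, 10.0.0.1); the other two
  # target-side ends move into nvmf_tgt_ns_spdk (10.0.0.2 and 10.0.0.3). A bridge
  # ties the peer ends together and iptables admits NVMe/TCP traffic on port 4420.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT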
00:14:18.447 10:01:16 -- nvmf/common.sh@469 -- # nvmfpid=83150 00:14:18.447 10:01:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:18.447 10:01:16 -- nvmf/common.sh@470 -- # waitforlisten 83150 00:14:18.447 10:01:16 -- common/autotest_common.sh@829 -- # '[' -z 83150 ']' 00:14:18.447 10:01:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.447 10:01:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.447 10:01:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.447 10:01:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.447 10:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:18.447 [2024-12-16 10:01:16.879894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:18.447 [2024-12-16 10:01:16.879987] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.447 [2024-12-16 10:01:17.019050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.705 [2024-12-16 10:01:17.077454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:18.705 [2024-12-16 10:01:17.077801] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.705 [2024-12-16 10:01:17.078201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.705 [2024-12-16 10:01:17.078473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
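At this point the target application has been started inside the namespace and the script waits for its RPC socket to come up. A simplified stand-in for the nvmfappstart/waitforlisten steps traced above might look like the following; the polling loop is only an illustration, not the actual waitforlisten implementation:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  # crude wait: give up after ~30 s if the UNIX domain RPC socket never appears
  for _ in $(seq 300); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done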
00:14:18.705 [2024-12-16 10:01:17.078765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.705 [2024-12-16 10:01:17.078917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.705 [2024-12-16 10:01:17.078921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.275 10:01:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.275 10:01:17 -- common/autotest_common.sh@862 -- # return 0 00:14:19.275 10:01:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:19.275 10:01:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.275 10:01:17 -- common/autotest_common.sh@10 -- # set +x 00:14:19.275 10:01:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.275 10:01:17 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.547 [2024-12-16 10:01:18.114568] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.547 10:01:18 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.127 10:01:18 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:20.127 10:01:18 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.127 10:01:18 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:20.127 10:01:18 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:20.385 10:01:18 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:20.643 10:01:19 -- target/nvmf_lvol.sh@29 -- # lvs=30276cbb-bb79-49e7-b6e9-361b0f66277c 00:14:20.643 10:01:19 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 30276cbb-bb79-49e7-b6e9-361b0f66277c lvol 20 00:14:20.901 10:01:19 -- target/nvmf_lvol.sh@32 -- # lvol=448e8398-d869-4937-b66b-45a1ad732d03 00:14:20.901 10:01:19 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.159 10:01:19 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 448e8398-d869-4937-b66b-45a1ad732d03 00:14:21.418 10:01:19 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.676 [2024-12-16 10:01:20.124680] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.676 10:01:20 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.934 10:01:20 -- target/nvmf_lvol.sh@42 -- # perf_pid=83301 00:14:21.934 10:01:20 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:21.934 10:01:20 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:22.868 10:01:21 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 448e8398-d869-4937-b66b-45a1ad732d03 MY_SNAPSHOT 00:14:23.127 10:01:21 -- target/nvmf_lvol.sh@47 -- # snapshot=74dc3767-08e5-4705-9585-0dafbe9b4f67 00:14:23.127 10:01:21 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 448e8398-d869-4937-b66b-45a1ad732d03 30
00:14:23.693 10:01:22 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 74dc3767-08e5-4705-9585-0dafbe9b4f67 MY_CLONE
00:14:23.951 10:01:22 -- target/nvmf_lvol.sh@49 -- # clone=a8a74497-5bb4-4d2a-aa59-628580c35ea7
00:14:23.951 10:01:22 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a8a74497-5bb4-4d2a-aa59-628580c35ea7
00:14:24.517 10:01:22 -- target/nvmf_lvol.sh@53 -- # wait 83301
00:14:32.631 Initializing NVMe Controllers
00:14:32.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:14:32.631 Controller IO queue size 128, less than required.
00:14:32.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:32.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:14:32.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:14:32.631 Initialization complete. Launching workers.
00:14:32.631 ========================================================
00:14:32.631 Latency(us)
00:14:32.631 Device Information : IOPS MiB/s Average min max
00:14:32.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11832.30 46.22 10821.91 2006.28 69706.91
00:14:32.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11765.70 45.96 10878.71 3338.24 57029.72
00:14:32.631 ========================================================
00:14:32.631 Total : 23598.00 92.18 10850.23 2006.28 69706.91
00:14:32.631
00:14:32.631 10:01:30 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:32.631 10:01:30 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 448e8398-d869-4937-b66b-45a1ad732d03
00:14:32.890 10:01:31 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30276cbb-bb79-49e7-b6e9-361b0f66277c
00:14:32.890 10:01:31 -- target/nvmf_lvol.sh@60 -- # rm -f
00:14:32.890 10:01:31 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:14:32.890 10:01:31 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:14:32.890 10:01:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:32.890 10:01:31 -- nvmf/common.sh@116 -- # sync
00:14:32.890 10:01:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:32.890 10:01:31 -- nvmf/common.sh@119 -- # set +e
00:14:32.890 10:01:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:32.890 10:01:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:33.148 rmmod nvme_tcp
00:14:33.148 rmmod nvme_fabrics
00:14:33.148 rmmod nvme_keyring
00:14:33.148 10:01:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:33.149 10:01:31 -- nvmf/common.sh@123 -- # set -e
00:14:33.149 10:01:31 -- nvmf/common.sh@124 -- # return 0
00:14:33.149 10:01:31 -- nvmf/common.sh@477 -- # '[' -n 83150 ']'
00:14:33.149 10:01:31 -- nvmf/common.sh@478 -- # killprocess 83150
00:14:33.149 10:01:31 -- common/autotest_common.sh@936 -- # '[' -z 83150 ']'
00:14:33.149 10:01:31 -- common/autotest_common.sh@940 -- # kill -0 83150
00:14:33.149 10:01:31 -- common/autotest_common.sh@941 -- # uname
00:14:33.149 10:01:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:33.149 10:01:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o
comm= 83150 00:14:33.149 killing process with pid 83150 00:14:33.149 10:01:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:33.149 10:01:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:33.149 10:01:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83150' 00:14:33.149 10:01:31 -- common/autotest_common.sh@955 -- # kill 83150 00:14:33.149 10:01:31 -- common/autotest_common.sh@960 -- # wait 83150 00:14:33.407 10:01:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:33.407 10:01:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:33.407 10:01:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:33.407 10:01:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.407 10:01:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:33.407 10:01:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.407 10:01:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.407 10:01:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.407 10:01:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:33.407 ************************************ 00:14:33.407 END TEST nvmf_lvol 00:14:33.407 ************************************ 00:14:33.407 00:14:33.407 real 0m15.572s 00:14:33.407 user 1m5.228s 00:14:33.407 sys 0m3.746s 00:14:33.407 10:01:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:33.407 10:01:31 -- common/autotest_common.sh@10 -- # set +x 00:14:33.407 10:01:31 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.407 10:01:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:33.407 10:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.407 10:01:31 -- common/autotest_common.sh@10 -- # set +x 00:14:33.407 ************************************ 00:14:33.407 START TEST nvmf_lvs_grow 00:14:33.407 ************************************ 00:14:33.407 10:01:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.407 * Looking for test storage... 
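Purely for readability, here is the RPC sequence the nvmf_lvol test above drove against the target, collected into one place. It is a sketch assembled from the trace (the $rpc, $lvs, $lvol, $snapshot and $clone variables stand in for the concrete UUIDs printed in this run), not a substitute for nvmf_lvol.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                  # Malloc0
  $rpc bdev_malloc_create 64 512                                  # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # spdk_nvme_perf runs against the subsystem here (see the results table above)
  snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"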
00:14:33.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:33.408 10:01:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:33.408 10:01:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:33.408 10:01:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:33.667 10:01:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:33.667 10:01:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:33.667 10:01:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:33.667 10:01:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:33.667 10:01:32 -- scripts/common.sh@335 -- # IFS=.-: 00:14:33.667 10:01:32 -- scripts/common.sh@335 -- # read -ra ver1 00:14:33.667 10:01:32 -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.667 10:01:32 -- scripts/common.sh@336 -- # read -ra ver2 00:14:33.667 10:01:32 -- scripts/common.sh@337 -- # local 'op=<' 00:14:33.667 10:01:32 -- scripts/common.sh@339 -- # ver1_l=2 00:14:33.667 10:01:32 -- scripts/common.sh@340 -- # ver2_l=1 00:14:33.667 10:01:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:33.667 10:01:32 -- scripts/common.sh@343 -- # case "$op" in 00:14:33.667 10:01:32 -- scripts/common.sh@344 -- # : 1 00:14:33.667 10:01:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:33.667 10:01:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.667 10:01:32 -- scripts/common.sh@364 -- # decimal 1 00:14:33.667 10:01:32 -- scripts/common.sh@352 -- # local d=1 00:14:33.667 10:01:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.667 10:01:32 -- scripts/common.sh@354 -- # echo 1 00:14:33.667 10:01:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:33.667 10:01:32 -- scripts/common.sh@365 -- # decimal 2 00:14:33.667 10:01:32 -- scripts/common.sh@352 -- # local d=2 00:14:33.667 10:01:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.667 10:01:32 -- scripts/common.sh@354 -- # echo 2 00:14:33.667 10:01:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:33.667 10:01:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:33.667 10:01:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:33.667 10:01:32 -- scripts/common.sh@367 -- # return 0 00:14:33.667 10:01:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.667 10:01:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.667 --rc genhtml_branch_coverage=1 00:14:33.667 --rc genhtml_function_coverage=1 00:14:33.667 --rc genhtml_legend=1 00:14:33.667 --rc geninfo_all_blocks=1 00:14:33.667 --rc geninfo_unexecuted_blocks=1 00:14:33.667 00:14:33.667 ' 00:14:33.667 10:01:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.667 --rc genhtml_branch_coverage=1 00:14:33.667 --rc genhtml_function_coverage=1 00:14:33.667 --rc genhtml_legend=1 00:14:33.667 --rc geninfo_all_blocks=1 00:14:33.667 --rc geninfo_unexecuted_blocks=1 00:14:33.667 00:14:33.667 ' 00:14:33.667 10:01:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.667 --rc genhtml_branch_coverage=1 00:14:33.667 --rc genhtml_function_coverage=1 00:14:33.667 --rc genhtml_legend=1 00:14:33.667 --rc geninfo_all_blocks=1 00:14:33.667 --rc geninfo_unexecuted_blocks=1 00:14:33.667 00:14:33.667 ' 00:14:33.667 
10:01:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.667 --rc genhtml_branch_coverage=1 00:14:33.667 --rc genhtml_function_coverage=1 00:14:33.667 --rc genhtml_legend=1 00:14:33.667 --rc geninfo_all_blocks=1 00:14:33.667 --rc geninfo_unexecuted_blocks=1 00:14:33.667 00:14:33.667 ' 00:14:33.667 10:01:32 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.667 10:01:32 -- nvmf/common.sh@7 -- # uname -s 00:14:33.667 10:01:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.667 10:01:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.667 10:01:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.667 10:01:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.667 10:01:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.667 10:01:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.667 10:01:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.667 10:01:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.667 10:01:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.667 10:01:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.667 10:01:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:33.667 10:01:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:14:33.667 10:01:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.667 10:01:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.667 10:01:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.667 10:01:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.667 10:01:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.667 10:01:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.667 10:01:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.667 10:01:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.667 10:01:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.667 10:01:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.667 10:01:32 -- paths/export.sh@5 -- # export PATH 00:14:33.668 10:01:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.668 10:01:32 -- nvmf/common.sh@46 -- # : 0 00:14:33.668 10:01:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.668 10:01:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.668 10:01:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.668 10:01:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.668 10:01:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.668 10:01:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.668 10:01:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.668 10:01:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.668 10:01:32 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.668 10:01:32 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.668 10:01:32 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:33.668 10:01:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.668 10:01:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.668 10:01:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.668 10:01:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.668 10:01:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.668 10:01:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.668 10:01:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.668 10:01:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.668 10:01:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:33.668 10:01:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:33.668 10:01:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:33.668 10:01:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:33.668 10:01:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:33.668 10:01:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:33.668 10:01:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.668 10:01:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.668 10:01:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.668 10:01:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:33.668 10:01:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.668 10:01:32 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.668 10:01:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.668 10:01:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.668 10:01:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.668 10:01:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.668 10:01:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.668 10:01:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.668 10:01:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:33.668 10:01:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:33.668 Cannot find device "nvmf_tgt_br" 00:14:33.668 10:01:32 -- nvmf/common.sh@154 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.668 Cannot find device "nvmf_tgt_br2" 00:14:33.668 10:01:32 -- nvmf/common.sh@155 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:33.668 10:01:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:33.668 Cannot find device "nvmf_tgt_br" 00:14:33.668 10:01:32 -- nvmf/common.sh@157 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:33.668 Cannot find device "nvmf_tgt_br2" 00:14:33.668 10:01:32 -- nvmf/common.sh@158 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:33.668 10:01:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:33.668 10:01:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.668 10:01:32 -- nvmf/common.sh@161 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.668 10:01:32 -- nvmf/common.sh@162 -- # true 00:14:33.668 10:01:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.668 10:01:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.668 10:01:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.927 10:01:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.927 10:01:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.927 10:01:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.927 10:01:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.927 10:01:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.927 10:01:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.927 10:01:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:33.927 10:01:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:33.927 10:01:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:33.927 10:01:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:33.927 10:01:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.927 10:01:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
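The ip commands traced above build the test's virtual network — after first tearing down any leftovers from a previous run (the "Cannot find device" lines are the expected no-op case): a namespace (nvmf_tgt_ns_spdk) holds the target-side ends of two veth pairs, while the initiator end stays in the root namespace. A minimal standalone sketch of that topology, using the interface and address names visible in this log (the canonical helper is nvmf_veth_init in test/nvmf/common.sh, so treat this as an approximation rather than the real script):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The records that follow finish the picture: an nvmf_br bridge enslaving the *_br peers, an iptables ACCEPT rule for TCP port 4420 on nvmf_init_if, and ping checks of 10.0.0.2/10.0.0.3 before the target is started.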
00:14:33.927 10:01:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.927 10:01:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:33.927 10:01:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:33.927 10:01:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.927 10:01:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.927 10:01:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.927 10:01:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.927 10:01:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.927 10:01:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:33.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:33.927 00:14:33.927 --- 10.0.0.2 ping statistics --- 00:14:33.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.927 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:33.927 10:01:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:33.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:14:33.927 00:14:33.927 --- 10.0.0.3 ping statistics --- 00:14:33.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.927 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:33.927 10:01:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:33.927 00:14:33.927 --- 10.0.0.1 ping statistics --- 00:14:33.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.927 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:33.927 10:01:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.927 10:01:32 -- nvmf/common.sh@421 -- # return 0 00:14:33.927 10:01:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.927 10:01:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.927 10:01:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.927 10:01:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.927 10:01:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.927 10:01:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.927 10:01:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.927 10:01:32 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:33.927 10:01:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.927 10:01:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.927 10:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:33.927 10:01:32 -- nvmf/common.sh@469 -- # nvmfpid=83662 00:14:33.927 10:01:32 -- nvmf/common.sh@470 -- # waitforlisten 83662 00:14:33.927 10:01:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:33.927 10:01:32 -- common/autotest_common.sh@829 -- # '[' -z 83662 ']' 00:14:33.927 10:01:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.927 10:01:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
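The records around this point launch the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1) and then waitforlisten polls its JSON-RPC socket before any RPCs are issued. A rough standalone equivalent of that launch-and-wait, assuming the same paths as in the log and that the readiness probe is essentially an rpc_get_methods call (the actual helper, with its retry budget of 100, lives in autotest_common.sh):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Poll until the app answers on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done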
00:14:33.927 10:01:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.927 10:01:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.927 10:01:32 -- common/autotest_common.sh@10 -- # set +x 00:14:33.927 [2024-12-16 10:01:32.520098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.927 [2024-12-16 10:01:32.520175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.187 [2024-12-16 10:01:32.651824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.187 [2024-12-16 10:01:32.706735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:34.187 [2024-12-16 10:01:32.707179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.187 [2024-12-16 10:01:32.707296] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.187 [2024-12-16 10:01:32.707453] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.187 [2024-12-16 10:01:32.707509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.123 10:01:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.123 10:01:33 -- common/autotest_common.sh@862 -- # return 0 00:14:35.123 10:01:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:35.123 10:01:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.123 10:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:35.123 10:01:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.123 10:01:33 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.123 [2024-12-16 10:01:33.718379] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.123 10:01:33 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:35.123 10:01:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:35.123 10:01:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.123 10:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:35.382 ************************************ 00:14:35.382 START TEST lvs_grow_clean 00:14:35.382 ************************************ 00:14:35.382 10:01:33 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.382 10:01:33 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.640 10:01:34 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:35.640 10:01:34 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:35.899 10:01:34 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:35.899 10:01:34 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:35.899 10:01:34 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:36.158 10:01:34 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:36.158 10:01:34 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:36.158 10:01:34 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b lvol 150 00:14:36.417 10:01:34 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9474cf07-32f1-4647-a509-936e78b002ed 00:14:36.417 10:01:34 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.417 10:01:34 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:36.674 [2024-12-16 10:01:35.078480] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:36.674 [2024-12-16 10:01:35.078550] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:36.674 true 00:14:36.674 10:01:35 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:36.674 10:01:35 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:36.932 10:01:35 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:36.932 10:01:35 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:37.190 10:01:35 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9474cf07-32f1-4647-a509-936e78b002ed 00:14:37.190 10:01:35 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:37.449 [2024-12-16 10:01:36.038995] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.449 10:01:36 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.016 10:01:36 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83829 00:14:38.016 10:01:36 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:38.016 10:01:36 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.016 10:01:36 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83829 /var/tmp/bdevperf.sock 00:14:38.016 10:01:36 -- common/autotest_common.sh@829 -- # '[' -z 83829 ']' 00:14:38.016 
10:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.016 10:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.016 10:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.016 10:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.016 10:01:36 -- common/autotest_common.sh@10 -- # set +x 00:14:38.016 [2024-12-16 10:01:36.425629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:38.016 [2024-12-16 10:01:36.425731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83829 ] 00:14:38.016 [2024-12-16 10:01:36.566653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.017 [2024-12-16 10:01:36.630305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.953 10:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.953 10:01:37 -- common/autotest_common.sh@862 -- # return 0 00:14:38.953 10:01:37 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:39.212 Nvme0n1 00:14:39.212 10:01:37 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:39.212 [ 00:14:39.212 { 00:14:39.212 "aliases": [ 00:14:39.212 "9474cf07-32f1-4647-a509-936e78b002ed" 00:14:39.212 ], 00:14:39.212 "assigned_rate_limits": { 00:14:39.212 "r_mbytes_per_sec": 0, 00:14:39.212 "rw_ios_per_sec": 0, 00:14:39.212 "rw_mbytes_per_sec": 0, 00:14:39.212 "w_mbytes_per_sec": 0 00:14:39.212 }, 00:14:39.212 "block_size": 4096, 00:14:39.212 "claimed": false, 00:14:39.212 "driver_specific": { 00:14:39.212 "mp_policy": "active_passive", 00:14:39.212 "nvme": [ 00:14:39.212 { 00:14:39.212 "ctrlr_data": { 00:14:39.212 "ana_reporting": false, 00:14:39.212 "cntlid": 1, 00:14:39.212 "firmware_revision": "24.01.1", 00:14:39.212 "model_number": "SPDK bdev Controller", 00:14:39.212 "multi_ctrlr": true, 00:14:39.212 "oacs": { 00:14:39.212 "firmware": 0, 00:14:39.212 "format": 0, 00:14:39.212 "ns_manage": 0, 00:14:39.212 "security": 0 00:14:39.212 }, 00:14:39.212 "serial_number": "SPDK0", 00:14:39.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.212 "vendor_id": "0x8086" 00:14:39.212 }, 00:14:39.212 "ns_data": { 00:14:39.212 "can_share": true, 00:14:39.212 "id": 1 00:14:39.212 }, 00:14:39.212 "trid": { 00:14:39.212 "adrfam": "IPv4", 00:14:39.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.212 "traddr": "10.0.0.2", 00:14:39.212 "trsvcid": "4420", 00:14:39.212 "trtype": "TCP" 00:14:39.212 }, 00:14:39.212 "vs": { 00:14:39.212 "nvme_version": "1.3" 00:14:39.212 } 00:14:39.212 } 00:14:39.212 ] 00:14:39.212 }, 00:14:39.212 "name": "Nvme0n1", 00:14:39.212 "num_blocks": 38912, 00:14:39.212 "product_name": "NVMe disk", 00:14:39.212 "supported_io_types": { 00:14:39.212 "abort": true, 00:14:39.212 "compare": true, 00:14:39.212 "compare_and_write": true, 00:14:39.212 "flush": true, 00:14:39.212 "nvme_admin": true, 00:14:39.212 "nvme_io": true, 00:14:39.212 "read": true, 
00:14:39.212 "reset": true, 00:14:39.212 "unmap": true, 00:14:39.212 "write": true, 00:14:39.212 "write_zeroes": true 00:14:39.212 }, 00:14:39.212 "uuid": "9474cf07-32f1-4647-a509-936e78b002ed", 00:14:39.212 "zoned": false 00:14:39.212 } 00:14:39.212 ] 00:14:39.212 10:01:37 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83877 00:14:39.212 10:01:37 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.212 10:01:37 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:39.471 Running I/O for 10 seconds... 00:14:40.407 Latency(us) 00:14:40.407 [2024-12-16T10:01:39.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.407 [2024-12-16T10:01:39.032Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.407 Nvme0n1 : 1.00 7305.00 28.54 0.00 0.00 0.00 0.00 0.00 00:14:40.407 [2024-12-16T10:01:39.032Z] =================================================================================================================== 00:14:40.407 [2024-12-16T10:01:39.032Z] Total : 7305.00 28.54 0.00 0.00 0.00 0.00 0.00 00:14:40.407 00:14:41.344 10:01:39 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:41.344 [2024-12-16T10:01:39.969Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.344 Nvme0n1 : 2.00 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:14:41.344 [2024-12-16T10:01:39.969Z] =================================================================================================================== 00:14:41.344 [2024-12-16T10:01:39.969Z] Total : 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:14:41.344 00:14:41.603 true 00:14:41.603 10:01:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:41.603 10:01:40 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:41.862 10:01:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:41.862 10:01:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:41.862 10:01:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 83877 00:14:42.428 [2024-12-16T10:01:41.053Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.428 Nvme0n1 : 3.00 7349.33 28.71 0.00 0.00 0.00 0.00 0.00 00:14:42.428 [2024-12-16T10:01:41.053Z] =================================================================================================================== 00:14:42.428 [2024-12-16T10:01:41.053Z] Total : 7349.33 28.71 0.00 0.00 0.00 0.00 0.00 00:14:42.428 00:14:43.364 [2024-12-16T10:01:41.989Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.364 Nvme0n1 : 4.00 7323.00 28.61 0.00 0.00 0.00 0.00 0.00 00:14:43.364 [2024-12-16T10:01:41.989Z] =================================================================================================================== 00:14:43.364 [2024-12-16T10:01:41.989Z] Total : 7323.00 28.61 0.00 0.00 0.00 0.00 0.00 00:14:43.364 00:14:44.301 [2024-12-16T10:01:42.926Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.301 Nvme0n1 : 5.00 7293.00 28.49 0.00 0.00 0.00 0.00 0.00 00:14:44.301 [2024-12-16T10:01:42.926Z] =================================================================================================================== 00:14:44.301 [2024-12-16T10:01:42.926Z] Total : 7293.00 
28.49 0.00 0.00 0.00 0.00 0.00 00:14:44.301 00:14:45.678 [2024-12-16T10:01:44.303Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.678 Nvme0n1 : 6.00 7279.17 28.43 0.00 0.00 0.00 0.00 0.00 00:14:45.678 [2024-12-16T10:01:44.303Z] =================================================================================================================== 00:14:45.678 [2024-12-16T10:01:44.303Z] Total : 7279.17 28.43 0.00 0.00 0.00 0.00 0.00 00:14:45.678 00:14:46.614 [2024-12-16T10:01:45.239Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.614 Nvme0n1 : 7.00 7288.14 28.47 0.00 0.00 0.00 0.00 0.00 00:14:46.614 [2024-12-16T10:01:45.239Z] =================================================================================================================== 00:14:46.614 [2024-12-16T10:01:45.239Z] Total : 7288.14 28.47 0.00 0.00 0.00 0.00 0.00 00:14:46.614 00:14:47.619 [2024-12-16T10:01:46.244Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.619 Nvme0n1 : 8.00 7278.62 28.43 0.00 0.00 0.00 0.00 0.00 00:14:47.619 [2024-12-16T10:01:46.244Z] =================================================================================================================== 00:14:47.619 [2024-12-16T10:01:46.244Z] Total : 7278.62 28.43 0.00 0.00 0.00 0.00 0.00 00:14:47.619 00:14:48.555 [2024-12-16T10:01:47.180Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.555 Nvme0n1 : 9.00 7182.78 28.06 0.00 0.00 0.00 0.00 0.00 00:14:48.555 [2024-12-16T10:01:47.180Z] =================================================================================================================== 00:14:48.555 [2024-12-16T10:01:47.180Z] Total : 7182.78 28.06 0.00 0.00 0.00 0.00 0.00 00:14:48.555 00:14:49.492 [2024-12-16T10:01:48.117Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.492 Nvme0n1 : 10.00 7155.90 27.95 0.00 0.00 0.00 0.00 0.00 00:14:49.492 [2024-12-16T10:01:48.117Z] =================================================================================================================== 00:14:49.492 [2024-12-16T10:01:48.117Z] Total : 7155.90 27.95 0.00 0.00 0.00 0.00 0.00 00:14:49.492 00:14:49.492 00:14:49.492 Latency(us) 00:14:49.492 [2024-12-16T10:01:48.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.492 [2024-12-16T10:01:48.117Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.492 Nvme0n1 : 10.01 7159.69 27.97 0.00 0.00 17871.34 8043.05 136314.88 00:14:49.492 [2024-12-16T10:01:48.117Z] =================================================================================================================== 00:14:49.492 [2024-12-16T10:01:48.117Z] Total : 7159.69 27.97 0.00 0.00 17871.34 8043.05 136314.88 00:14:49.492 0 00:14:49.492 10:01:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83829 00:14:49.492 10:01:47 -- common/autotest_common.sh@936 -- # '[' -z 83829 ']' 00:14:49.492 10:01:47 -- common/autotest_common.sh@940 -- # kill -0 83829 00:14:49.492 10:01:47 -- common/autotest_common.sh@941 -- # uname 00:14:49.492 10:01:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.492 10:01:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83829 00:14:49.492 10:01:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:49.492 10:01:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:49.492 10:01:47 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 83829' 00:14:49.492 killing process with pid 83829 00:14:49.492 10:01:47 -- common/autotest_common.sh@955 -- # kill 83829 00:14:49.492 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.492 00:14:49.492 Latency(us) 00:14:49.492 [2024-12-16T10:01:48.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.492 [2024-12-16T10:01:48.117Z] =================================================================================================================== 00:14:49.492 [2024-12-16T10:01:48.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:49.492 10:01:47 -- common/autotest_common.sh@960 -- # wait 83829 00:14:49.751 10:01:48 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.010 10:01:48 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:50.010 10:01:48 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:50.269 10:01:48 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:50.269 10:01:48 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:50.269 10:01:48 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.269 [2024-12-16 10:01:48.839983] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:50.269 10:01:48 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:50.269 10:01:48 -- common/autotest_common.sh@650 -- # local es=0 00:14:50.269 10:01:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:50.269 10:01:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.269 10:01:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.269 10:01:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.269 10:01:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.269 10:01:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.269 10:01:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.269 10:01:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.269 10:01:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:50.269 10:01:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:50.527 2024/12/16 10:01:49 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ec8b1c2c-8fb3-40de-ab67-02653c3a376b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:50.527 request: 00:14:50.527 { 00:14:50.527 "method": "bdev_lvol_get_lvstores", 00:14:50.527 "params": { 00:14:50.527 "uuid": "ec8b1c2c-8fb3-40de-ab67-02653c3a376b" 00:14:50.527 } 00:14:50.527 } 00:14:50.527 Got JSON-RPC error response 00:14:50.527 GoRPCClient: error on JSON-RPC call 00:14:50.527 10:01:49 -- common/autotest_common.sh@653 -- # es=1 00:14:50.528 10:01:49 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.528 10:01:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.528 10:01:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.528 10:01:49 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.786 aio_bdev 00:14:50.786 10:01:49 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9474cf07-32f1-4647-a509-936e78b002ed 00:14:50.786 10:01:49 -- common/autotest_common.sh@897 -- # local bdev_name=9474cf07-32f1-4647-a509-936e78b002ed 00:14:50.786 10:01:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.786 10:01:49 -- common/autotest_common.sh@899 -- # local i 00:14:50.786 10:01:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.786 10:01:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.786 10:01:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.045 10:01:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9474cf07-32f1-4647-a509-936e78b002ed -t 2000 00:14:51.304 [ 00:14:51.304 { 00:14:51.304 "aliases": [ 00:14:51.304 "lvs/lvol" 00:14:51.304 ], 00:14:51.304 "assigned_rate_limits": { 00:14:51.304 "r_mbytes_per_sec": 0, 00:14:51.304 "rw_ios_per_sec": 0, 00:14:51.304 "rw_mbytes_per_sec": 0, 00:14:51.304 "w_mbytes_per_sec": 0 00:14:51.304 }, 00:14:51.304 "block_size": 4096, 00:14:51.304 "claimed": false, 00:14:51.304 "driver_specific": { 00:14:51.304 "lvol": { 00:14:51.304 "base_bdev": "aio_bdev", 00:14:51.304 "clone": false, 00:14:51.304 "esnap_clone": false, 00:14:51.304 "lvol_store_uuid": "ec8b1c2c-8fb3-40de-ab67-02653c3a376b", 00:14:51.304 "snapshot": false, 00:14:51.304 "thin_provision": false 00:14:51.304 } 00:14:51.304 }, 00:14:51.304 "name": "9474cf07-32f1-4647-a509-936e78b002ed", 00:14:51.304 "num_blocks": 38912, 00:14:51.304 "product_name": "Logical Volume", 00:14:51.304 "supported_io_types": { 00:14:51.304 "abort": false, 00:14:51.304 "compare": false, 00:14:51.304 "compare_and_write": false, 00:14:51.304 "flush": false, 00:14:51.304 "nvme_admin": false, 00:14:51.304 "nvme_io": false, 00:14:51.304 "read": true, 00:14:51.304 "reset": true, 00:14:51.304 "unmap": true, 00:14:51.304 "write": true, 00:14:51.304 "write_zeroes": true 00:14:51.304 }, 00:14:51.304 "uuid": "9474cf07-32f1-4647-a509-936e78b002ed", 00:14:51.304 "zoned": false 00:14:51.304 } 00:14:51.304 ] 00:14:51.304 10:01:49 -- common/autotest_common.sh@905 -- # return 0 00:14:51.304 10:01:49 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:51.304 10:01:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:51.562 10:01:50 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:51.562 10:01:50 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:51.562 10:01:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:51.821 10:01:50 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:51.821 10:01:50 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9474cf07-32f1-4647-a509-936e78b002ed 00:14:52.080 10:01:50 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ec8b1c2c-8fb3-40de-ab67-02653c3a376b 00:14:52.080 10:01:50 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.339 10:01:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.907 ************************************ 00:14:52.907 END TEST lvs_grow_clean 00:14:52.907 ************************************ 00:14:52.907 00:14:52.907 real 0m17.514s 00:14:52.907 user 0m16.883s 00:14:52.907 sys 0m2.035s 00:14:52.907 10:01:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.907 10:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:52.907 10:01:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.907 10:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.907 10:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:52.907 ************************************ 00:14:52.907 START TEST lvs_grow_dirty 00:14:52.907 ************************************ 00:14:52.907 10:01:51 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.907 10:01:51 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.165 10:01:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:53.165 10:01:51 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:53.424 10:01:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4de892f-38ce-4c52-8508-8909fef03af0 00:14:53.424 10:01:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:14:53.424 10:01:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:53.683 10:01:52 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:53.683 10:01:52 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:53.683 10:01:52 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a4de892f-38ce-4c52-8508-8909fef03af0 lvol 150 00:14:53.941 10:01:52 -- target/nvmf_lvs_grow.sh@33 -- # lvol=53f16359-1628-401c-a806-1a11f3973e6c 00:14:53.941 10:01:52 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:53.941 10:01:52 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:54.200 [2024-12-16 10:01:52.638245] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:54.200 [2024-12-16 10:01:52.638312] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:54.200 true 00:14:54.200 10:01:52 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:14:54.200 10:01:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:54.459 10:01:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:54.459 10:01:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:54.718 10:01:53 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53f16359-1628-401c-a806-1a11f3973e6c 00:14:54.718 10:01:53 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:54.976 10:01:53 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.544 10:01:53 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84257 00:14:55.544 10:01:53 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:55.544 10:01:53 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.544 10:01:53 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84257 /var/tmp/bdevperf.sock 00:14:55.544 10:01:53 -- common/autotest_common.sh@829 -- # '[' -z 84257 ']' 00:14:55.544 10:01:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.544 10:01:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.544 10:01:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.544 10:01:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.544 10:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:55.544 [2024-12-16 10:01:53.916880] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
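On the initiator side, the flow that follows is the same as in the clean-test run above: bdevperf is started in wait mode (-z) on its own RPC socket, the exported namespace is attached as an NVMe-oF/TCP bdev, and the workload is then triggered over that socket. Condensed from the commands visible in this log (paths and arguments as logged; not a substitute for the script itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the target's namespace; it appears as bdev "Nvme0n1".
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    # -z made bdevperf wait; perform_tests kicks off the 10 s randwrite run.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests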
00:14:55.544 [2024-12-16 10:01:53.916981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84257 ] 00:14:55.544 [2024-12-16 10:01:54.057279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.544 [2024-12-16 10:01:54.119565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.480 10:01:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.480 10:01:54 -- common/autotest_common.sh@862 -- # return 0 00:14:56.480 10:01:54 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:56.480 Nvme0n1 00:14:56.737 10:01:55 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:56.995 [ 00:14:56.995 { 00:14:56.995 "aliases": [ 00:14:56.995 "53f16359-1628-401c-a806-1a11f3973e6c" 00:14:56.995 ], 00:14:56.996 "assigned_rate_limits": { 00:14:56.996 "r_mbytes_per_sec": 0, 00:14:56.996 "rw_ios_per_sec": 0, 00:14:56.996 "rw_mbytes_per_sec": 0, 00:14:56.996 "w_mbytes_per_sec": 0 00:14:56.996 }, 00:14:56.996 "block_size": 4096, 00:14:56.996 "claimed": false, 00:14:56.996 "driver_specific": { 00:14:56.996 "mp_policy": "active_passive", 00:14:56.996 "nvme": [ 00:14:56.996 { 00:14:56.996 "ctrlr_data": { 00:14:56.996 "ana_reporting": false, 00:14:56.996 "cntlid": 1, 00:14:56.996 "firmware_revision": "24.01.1", 00:14:56.996 "model_number": "SPDK bdev Controller", 00:14:56.996 "multi_ctrlr": true, 00:14:56.996 "oacs": { 00:14:56.996 "firmware": 0, 00:14:56.996 "format": 0, 00:14:56.996 "ns_manage": 0, 00:14:56.996 "security": 0 00:14:56.996 }, 00:14:56.996 "serial_number": "SPDK0", 00:14:56.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:56.996 "vendor_id": "0x8086" 00:14:56.996 }, 00:14:56.996 "ns_data": { 00:14:56.996 "can_share": true, 00:14:56.996 "id": 1 00:14:56.996 }, 00:14:56.996 "trid": { 00:14:56.996 "adrfam": "IPv4", 00:14:56.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:56.996 "traddr": "10.0.0.2", 00:14:56.996 "trsvcid": "4420", 00:14:56.996 "trtype": "TCP" 00:14:56.996 }, 00:14:56.996 "vs": { 00:14:56.996 "nvme_version": "1.3" 00:14:56.996 } 00:14:56.996 } 00:14:56.996 ] 00:14:56.996 }, 00:14:56.996 "name": "Nvme0n1", 00:14:56.996 "num_blocks": 38912, 00:14:56.996 "product_name": "NVMe disk", 00:14:56.996 "supported_io_types": { 00:14:56.996 "abort": true, 00:14:56.996 "compare": true, 00:14:56.996 "compare_and_write": true, 00:14:56.996 "flush": true, 00:14:56.996 "nvme_admin": true, 00:14:56.996 "nvme_io": true, 00:14:56.996 "read": true, 00:14:56.996 "reset": true, 00:14:56.996 "unmap": true, 00:14:56.996 "write": true, 00:14:56.996 "write_zeroes": true 00:14:56.996 }, 00:14:56.996 "uuid": "53f16359-1628-401c-a806-1a11f3973e6c", 00:14:56.996 "zoned": false 00:14:56.996 } 00:14:56.996 ] 00:14:56.996 10:01:55 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.996 10:01:55 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84309 00:14:56.996 10:01:55 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:56.996 Running I/O for 10 seconds... 
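The backing AIO file was already truncated from 200M to 400M and rescanned (target/nvmf_lvs_grow.sh@36-37 above), so the step exercised while the 10-second run is in flight is growing the lvstore onto that new space and re-reading its cluster count, which this run's log shows going from 49 to 99. Roughly, using the store UUID from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u a4de892f-38ce-4c52-8508-8909fef03af0
    # For this 200M -> 400M resize the log shows total_data_clusters going from 49 to 99.
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 \
        | jq -r '.[0].total_data_clusters'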
00:14:57.931 Latency(us) 00:14:57.931 [2024-12-16T10:01:56.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.931 [2024-12-16T10:01:56.556Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.931 Nvme0n1 : 1.00 7527.00 29.40 0.00 0.00 0.00 0.00 0.00 00:14:57.931 [2024-12-16T10:01:56.556Z] =================================================================================================================== 00:14:57.931 [2024-12-16T10:01:56.556Z] Total : 7527.00 29.40 0.00 0.00 0.00 0.00 0.00 00:14:57.931 00:14:58.867 10:01:57 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4de892f-38ce-4c52-8508-8909fef03af0 00:14:58.867 [2024-12-16T10:01:57.492Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.867 Nvme0n1 : 2.00 7361.50 28.76 0.00 0.00 0.00 0.00 0.00 00:14:58.867 [2024-12-16T10:01:57.492Z] =================================================================================================================== 00:14:58.867 [2024-12-16T10:01:57.492Z] Total : 7361.50 28.76 0.00 0.00 0.00 0.00 0.00 00:14:58.867 00:14:59.126 true 00:14:59.126 10:01:57 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:14:59.126 10:01:57 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:59.693 10:01:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:59.693 10:01:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:59.693 10:01:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 84309 00:14:59.951 [2024-12-16T10:01:58.576Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.951 Nvme0n1 : 3.00 7363.00 28.76 0.00 0.00 0.00 0.00 0.00 00:14:59.951 [2024-12-16T10:01:58.576Z] =================================================================================================================== 00:14:59.951 [2024-12-16T10:01:58.576Z] Total : 7363.00 28.76 0.00 0.00 0.00 0.00 0.00 00:14:59.951 00:15:00.886 [2024-12-16T10:01:59.511Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.886 Nvme0n1 : 4.00 7351.50 28.72 0.00 0.00 0.00 0.00 0.00 00:15:00.886 [2024-12-16T10:01:59.511Z] =================================================================================================================== 00:15:00.886 [2024-12-16T10:01:59.511Z] Total : 7351.50 28.72 0.00 0.00 0.00 0.00 0.00 00:15:00.886 00:15:02.262 [2024-12-16T10:02:00.887Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.262 Nvme0n1 : 5.00 7399.40 28.90 0.00 0.00 0.00 0.00 0.00 00:15:02.262 [2024-12-16T10:02:00.887Z] =================================================================================================================== 00:15:02.262 [2024-12-16T10:02:00.887Z] Total : 7399.40 28.90 0.00 0.00 0.00 0.00 0.00 00:15:02.262 00:15:03.196 [2024-12-16T10:02:01.821Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.196 Nvme0n1 : 6.00 7261.67 28.37 0.00 0.00 0.00 0.00 0.00 00:15:03.196 [2024-12-16T10:02:01.821Z] =================================================================================================================== 00:15:03.196 [2024-12-16T10:02:01.821Z] Total : 7261.67 28.37 0.00 0.00 0.00 0.00 0.00 00:15:03.196 00:15:04.132 [2024-12-16T10:02:02.757Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:04.132 Nvme0n1 : 7.00 7241.71 28.29 0.00 0.00 0.00 0.00 0.00 00:15:04.132 [2024-12-16T10:02:02.757Z] =================================================================================================================== 00:15:04.132 [2024-12-16T10:02:02.757Z] Total : 7241.71 28.29 0.00 0.00 0.00 0.00 0.00 00:15:04.132 00:15:05.067 [2024-12-16T10:02:03.692Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.067 Nvme0n1 : 8.00 7237.12 28.27 0.00 0.00 0.00 0.00 0.00 00:15:05.067 [2024-12-16T10:02:03.692Z] =================================================================================================================== 00:15:05.067 [2024-12-16T10:02:03.692Z] Total : 7237.12 28.27 0.00 0.00 0.00 0.00 0.00 00:15:05.067 00:15:06.003 [2024-12-16T10:02:04.628Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.003 Nvme0n1 : 9.00 7238.44 28.28 0.00 0.00 0.00 0.00 0.00 00:15:06.003 [2024-12-16T10:02:04.628Z] =================================================================================================================== 00:15:06.003 [2024-12-16T10:02:04.628Z] Total : 7238.44 28.28 0.00 0.00 0.00 0.00 0.00 00:15:06.003 00:15:06.940 [2024-12-16T10:02:05.565Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.940 Nvme0n1 : 10.00 7252.40 28.33 0.00 0.00 0.00 0.00 0.00 00:15:06.940 [2024-12-16T10:02:05.565Z] =================================================================================================================== 00:15:06.940 [2024-12-16T10:02:05.565Z] Total : 7252.40 28.33 0.00 0.00 0.00 0.00 0.00 00:15:06.940 00:15:06.940 00:15:06.940 Latency(us) 00:15:06.940 [2024-12-16T10:02:05.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.940 [2024-12-16T10:02:05.565Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.940 Nvme0n1 : 10.02 7258.55 28.35 0.00 0.00 17629.74 5987.61 137268.13 00:15:06.940 [2024-12-16T10:02:05.565Z] =================================================================================================================== 00:15:06.940 [2024-12-16T10:02:05.565Z] Total : 7258.55 28.35 0.00 0.00 17629.74 5987.61 137268.13 00:15:06.940 0 00:15:06.940 10:02:05 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84257 00:15:06.940 10:02:05 -- common/autotest_common.sh@936 -- # '[' -z 84257 ']' 00:15:06.940 10:02:05 -- common/autotest_common.sh@940 -- # kill -0 84257 00:15:06.940 10:02:05 -- common/autotest_common.sh@941 -- # uname 00:15:06.940 10:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.940 10:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84257 00:15:06.940 10:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:06.940 10:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:06.940 killing process with pid 84257 00:15:06.940 10:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84257' 00:15:06.940 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.940 00:15:06.940 Latency(us) 00:15:06.940 [2024-12-16T10:02:05.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.940 [2024-12-16T10:02:05.565Z] =================================================================================================================== 00:15:06.940 [2024-12-16T10:02:05.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.940 10:02:05 -- common/autotest_common.sh@955 
-- # kill 84257 00:15:06.940 10:02:05 -- common/autotest_common.sh@960 -- # wait 84257 00:15:07.199 10:02:05 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:07.458 10:02:06 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:07.458 10:02:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83662 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@74 -- # wait 83662 00:15:07.716 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83662 Killed "${NVMF_APP[@]}" "$@" 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:07.716 10:02:06 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:07.716 10:02:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.716 10:02:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.716 10:02:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.716 10:02:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:07.716 10:02:06 -- nvmf/common.sh@469 -- # nvmfpid=84455 00:15:07.716 10:02:06 -- nvmf/common.sh@470 -- # waitforlisten 84455 00:15:07.716 10:02:06 -- common/autotest_common.sh@829 -- # '[' -z 84455 ']' 00:15:07.716 10:02:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.716 10:02:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.716 10:02:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.716 10:02:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.716 10:02:06 -- common/autotest_common.sh@10 -- # set +x 00:15:07.716 [2024-12-16 10:02:06.296787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:07.716 [2024-12-16 10:02:06.296889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.975 [2024-12-16 10:02:06.430067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.975 [2024-12-16 10:02:06.482373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:07.975 [2024-12-16 10:02:06.482554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.975 [2024-12-16 10:02:06.482567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.975 [2024-12-16 10:02:06.482575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:07.975 [2024-12-16 10:02:06.482606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.911 10:02:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.911 10:02:07 -- common/autotest_common.sh@862 -- # return 0 00:15:08.911 10:02:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:08.911 10:02:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.911 10:02:07 -- common/autotest_common.sh@10 -- # set +x 00:15:08.911 10:02:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.911 10:02:07 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:09.170 [2024-12-16 10:02:07.543917] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:09.170 [2024-12-16 10:02:07.544332] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:09.170 [2024-12-16 10:02:07.544570] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:09.170 10:02:07 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:09.170 10:02:07 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 53f16359-1628-401c-a806-1a11f3973e6c 00:15:09.170 10:02:07 -- common/autotest_common.sh@897 -- # local bdev_name=53f16359-1628-401c-a806-1a11f3973e6c 00:15:09.170 10:02:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.170 10:02:07 -- common/autotest_common.sh@899 -- # local i 00:15:09.170 10:02:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.170 10:02:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.170 10:02:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:09.429 10:02:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53f16359-1628-401c-a806-1a11f3973e6c -t 2000 00:15:09.687 [ 00:15:09.687 { 00:15:09.687 "aliases": [ 00:15:09.687 "lvs/lvol" 00:15:09.687 ], 00:15:09.687 "assigned_rate_limits": { 00:15:09.687 "r_mbytes_per_sec": 0, 00:15:09.687 "rw_ios_per_sec": 0, 00:15:09.687 "rw_mbytes_per_sec": 0, 00:15:09.687 "w_mbytes_per_sec": 0 00:15:09.687 }, 00:15:09.688 "block_size": 4096, 00:15:09.688 "claimed": false, 00:15:09.688 "driver_specific": { 00:15:09.688 "lvol": { 00:15:09.688 "base_bdev": "aio_bdev", 00:15:09.688 "clone": false, 00:15:09.688 "esnap_clone": false, 00:15:09.688 "lvol_store_uuid": "a4de892f-38ce-4c52-8508-8909fef03af0", 00:15:09.688 "snapshot": false, 00:15:09.688 "thin_provision": false 00:15:09.688 } 00:15:09.688 }, 00:15:09.688 "name": "53f16359-1628-401c-a806-1a11f3973e6c", 00:15:09.688 "num_blocks": 38912, 00:15:09.688 "product_name": "Logical Volume", 00:15:09.688 "supported_io_types": { 00:15:09.688 "abort": false, 00:15:09.688 "compare": false, 00:15:09.688 "compare_and_write": false, 00:15:09.688 "flush": false, 00:15:09.688 "nvme_admin": false, 00:15:09.688 "nvme_io": false, 00:15:09.688 "read": true, 00:15:09.688 "reset": true, 00:15:09.688 "unmap": true, 00:15:09.688 "write": true, 00:15:09.688 "write_zeroes": true 00:15:09.688 }, 00:15:09.688 "uuid": "53f16359-1628-401c-a806-1a11f3973e6c", 00:15:09.688 "zoned": false 00:15:09.688 } 00:15:09.688 ] 00:15:09.688 10:02:08 -- common/autotest_common.sh@905 -- # return 0 00:15:09.688 10:02:08 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a4de892f-38ce-4c52-8508-8909fef03af0 00:15:09.688 10:02:08 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:09.688 10:02:08 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:09.688 10:02:08 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:09.688 10:02:08 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:09.950 10:02:08 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:09.950 10:02:08 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:10.235 [2024-12-16 10:02:08.773331] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:10.235 10:02:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:10.235 10:02:08 -- common/autotest_common.sh@650 -- # local es=0 00:15:10.235 10:02:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:10.235 10:02:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.235 10:02:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.235 10:02:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.235 10:02:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.235 10:02:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.235 10:02:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.235 10:02:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.235 10:02:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:10.235 10:02:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:10.536 2024/12/16 10:02:09 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a4de892f-38ce-4c52-8508-8909fef03af0], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:10.536 request: 00:15:10.536 { 00:15:10.537 "method": "bdev_lvol_get_lvstores", 00:15:10.537 "params": { 00:15:10.537 "uuid": "a4de892f-38ce-4c52-8508-8909fef03af0" 00:15:10.537 } 00:15:10.537 } 00:15:10.537 Got JSON-RPC error response 00:15:10.537 GoRPCClient: error on JSON-RPC call 00:15:10.537 10:02:09 -- common/autotest_common.sh@653 -- # es=1 00:15:10.537 10:02:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.537 10:02:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.537 10:02:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.537 10:02:09 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.795 aio_bdev 00:15:10.795 10:02:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 53f16359-1628-401c-a806-1a11f3973e6c 00:15:10.795 10:02:09 -- common/autotest_common.sh@897 -- # local bdev_name=53f16359-1628-401c-a806-1a11f3973e6c 00:15:10.795 10:02:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.795 
10:02:09 -- common/autotest_common.sh@899 -- # local i 00:15:10.795 10:02:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.795 10:02:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.795 10:02:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.054 10:02:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53f16359-1628-401c-a806-1a11f3973e6c -t 2000 00:15:11.313 [ 00:15:11.313 { 00:15:11.313 "aliases": [ 00:15:11.313 "lvs/lvol" 00:15:11.313 ], 00:15:11.313 "assigned_rate_limits": { 00:15:11.313 "r_mbytes_per_sec": 0, 00:15:11.313 "rw_ios_per_sec": 0, 00:15:11.313 "rw_mbytes_per_sec": 0, 00:15:11.313 "w_mbytes_per_sec": 0 00:15:11.313 }, 00:15:11.313 "block_size": 4096, 00:15:11.313 "claimed": false, 00:15:11.313 "driver_specific": { 00:15:11.313 "lvol": { 00:15:11.313 "base_bdev": "aio_bdev", 00:15:11.313 "clone": false, 00:15:11.313 "esnap_clone": false, 00:15:11.313 "lvol_store_uuid": "a4de892f-38ce-4c52-8508-8909fef03af0", 00:15:11.313 "snapshot": false, 00:15:11.313 "thin_provision": false 00:15:11.313 } 00:15:11.313 }, 00:15:11.313 "name": "53f16359-1628-401c-a806-1a11f3973e6c", 00:15:11.313 "num_blocks": 38912, 00:15:11.313 "product_name": "Logical Volume", 00:15:11.313 "supported_io_types": { 00:15:11.313 "abort": false, 00:15:11.313 "compare": false, 00:15:11.313 "compare_and_write": false, 00:15:11.313 "flush": false, 00:15:11.313 "nvme_admin": false, 00:15:11.313 "nvme_io": false, 00:15:11.313 "read": true, 00:15:11.313 "reset": true, 00:15:11.313 "unmap": true, 00:15:11.313 "write": true, 00:15:11.313 "write_zeroes": true 00:15:11.313 }, 00:15:11.313 "uuid": "53f16359-1628-401c-a806-1a11f3973e6c", 00:15:11.313 "zoned": false 00:15:11.313 } 00:15:11.313 ] 00:15:11.313 10:02:09 -- common/autotest_common.sh@905 -- # return 0 00:15:11.313 10:02:09 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:11.313 10:02:09 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:11.570 10:02:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:11.570 10:02:09 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:11.570 10:02:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:11.570 10:02:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:11.828 10:02:10 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 53f16359-1628-401c-a806-1a11f3973e6c 00:15:11.828 10:02:10 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4de892f-38ce-4c52-8508-8909fef03af0 00:15:12.086 10:02:10 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.344 10:02:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:12.603 00:15:12.603 real 0m19.828s 00:15:12.603 user 0m38.476s 00:15:12.603 sys 0m9.788s 00:15:12.603 10:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:12.603 10:02:11 -- common/autotest_common.sh@10 -- # set +x 00:15:12.603 ************************************ 00:15:12.603 END TEST lvs_grow_dirty 00:15:12.603 ************************************ 00:15:12.603 10:02:11 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:12.603 10:02:11 -- common/autotest_common.sh@806 -- # type=--id 00:15:12.603 10:02:11 -- common/autotest_common.sh@807 -- # id=0 00:15:12.603 10:02:11 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:12.603 10:02:11 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:12.603 10:02:11 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:12.603 10:02:11 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:12.603 10:02:11 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:12.603 10:02:11 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:12.603 nvmf_trace.0 00:15:12.603 10:02:11 -- common/autotest_common.sh@821 -- # return 0 00:15:12.603 10:02:11 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:12.603 10:02:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:12.603 10:02:11 -- nvmf/common.sh@116 -- # sync 00:15:13.171 10:02:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:13.171 10:02:11 -- nvmf/common.sh@119 -- # set +e 00:15:13.171 10:02:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:13.171 10:02:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:13.171 rmmod nvme_tcp 00:15:13.171 rmmod nvme_fabrics 00:15:13.171 rmmod nvme_keyring 00:15:13.171 10:02:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:13.171 10:02:11 -- nvmf/common.sh@123 -- # set -e 00:15:13.171 10:02:11 -- nvmf/common.sh@124 -- # return 0 00:15:13.171 10:02:11 -- nvmf/common.sh@477 -- # '[' -n 84455 ']' 00:15:13.171 10:02:11 -- nvmf/common.sh@478 -- # killprocess 84455 00:15:13.171 10:02:11 -- common/autotest_common.sh@936 -- # '[' -z 84455 ']' 00:15:13.171 10:02:11 -- common/autotest_common.sh@940 -- # kill -0 84455 00:15:13.171 10:02:11 -- common/autotest_common.sh@941 -- # uname 00:15:13.171 10:02:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.171 10:02:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84455 00:15:13.171 10:02:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:13.171 10:02:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:13.171 10:02:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84455' 00:15:13.171 killing process with pid 84455 00:15:13.171 10:02:11 -- common/autotest_common.sh@955 -- # kill 84455 00:15:13.171 10:02:11 -- common/autotest_common.sh@960 -- # wait 84455 00:15:13.171 10:02:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.171 10:02:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:13.171 10:02:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:13.171 10:02:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.171 10:02:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:13.171 10:02:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.171 10:02:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.171 10:02:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.430 10:02:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:13.430 00:15:13.430 real 0m39.917s 00:15:13.430 user 1m1.474s 00:15:13.430 sys 0m12.686s 00:15:13.430 10:02:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.430 10:02:11 -- common/autotest_common.sh@10 -- # set +x 00:15:13.430 
************************************ 00:15:13.430 END TEST nvmf_lvs_grow 00:15:13.430 ************************************ 00:15:13.430 10:02:11 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:13.430 10:02:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.430 10:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.430 10:02:11 -- common/autotest_common.sh@10 -- # set +x 00:15:13.430 ************************************ 00:15:13.430 START TEST nvmf_bdev_io_wait 00:15:13.430 ************************************ 00:15:13.430 10:02:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:13.430 * Looking for test storage... 00:15:13.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:13.430 10:02:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:13.430 10:02:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:13.430 10:02:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:13.430 10:02:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:13.430 10:02:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:13.430 10:02:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:13.430 10:02:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:13.430 10:02:12 -- scripts/common.sh@335 -- # IFS=.-: 00:15:13.430 10:02:12 -- scripts/common.sh@335 -- # read -ra ver1 00:15:13.430 10:02:12 -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.430 10:02:12 -- scripts/common.sh@336 -- # read -ra ver2 00:15:13.430 10:02:12 -- scripts/common.sh@337 -- # local 'op=<' 00:15:13.430 10:02:12 -- scripts/common.sh@339 -- # ver1_l=2 00:15:13.430 10:02:12 -- scripts/common.sh@340 -- # ver2_l=1 00:15:13.430 10:02:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:13.430 10:02:12 -- scripts/common.sh@343 -- # case "$op" in 00:15:13.430 10:02:12 -- scripts/common.sh@344 -- # : 1 00:15:13.430 10:02:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:13.430 10:02:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.430 10:02:12 -- scripts/common.sh@364 -- # decimal 1 00:15:13.430 10:02:12 -- scripts/common.sh@352 -- # local d=1 00:15:13.430 10:02:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.430 10:02:12 -- scripts/common.sh@354 -- # echo 1 00:15:13.430 10:02:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:13.430 10:02:12 -- scripts/common.sh@365 -- # decimal 2 00:15:13.430 10:02:12 -- scripts/common.sh@352 -- # local d=2 00:15:13.430 10:02:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.430 10:02:12 -- scripts/common.sh@354 -- # echo 2 00:15:13.430 10:02:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:13.430 10:02:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:13.430 10:02:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:13.430 10:02:12 -- scripts/common.sh@367 -- # return 0 00:15:13.430 10:02:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.430 10:02:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.430 --rc genhtml_branch_coverage=1 00:15:13.430 --rc genhtml_function_coverage=1 00:15:13.430 --rc genhtml_legend=1 00:15:13.430 --rc geninfo_all_blocks=1 00:15:13.430 --rc geninfo_unexecuted_blocks=1 00:15:13.430 00:15:13.430 ' 00:15:13.430 10:02:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:13.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.430 --rc genhtml_branch_coverage=1 00:15:13.430 --rc genhtml_function_coverage=1 00:15:13.430 --rc genhtml_legend=1 00:15:13.431 --rc geninfo_all_blocks=1 00:15:13.431 --rc geninfo_unexecuted_blocks=1 00:15:13.431 00:15:13.431 ' 00:15:13.431 10:02:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:13.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.431 --rc genhtml_branch_coverage=1 00:15:13.431 --rc genhtml_function_coverage=1 00:15:13.431 --rc genhtml_legend=1 00:15:13.431 --rc geninfo_all_blocks=1 00:15:13.431 --rc geninfo_unexecuted_blocks=1 00:15:13.431 00:15:13.431 ' 00:15:13.431 10:02:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:13.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.431 --rc genhtml_branch_coverage=1 00:15:13.431 --rc genhtml_function_coverage=1 00:15:13.431 --rc genhtml_legend=1 00:15:13.431 --rc geninfo_all_blocks=1 00:15:13.431 --rc geninfo_unexecuted_blocks=1 00:15:13.431 00:15:13.431 ' 00:15:13.431 10:02:12 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.431 10:02:12 -- nvmf/common.sh@7 -- # uname -s 00:15:13.431 10:02:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.431 10:02:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.431 10:02:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.431 10:02:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.431 10:02:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.431 10:02:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.431 10:02:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.431 10:02:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.431 10:02:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.431 10:02:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.431 10:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
00:15:13.431 10:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:13.431 10:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.431 10:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.431 10:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.431 10:02:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.431 10:02:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.431 10:02:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.431 10:02:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.431 10:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.431 10:02:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.431 10:02:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.431 10:02:12 -- paths/export.sh@5 -- # export PATH 00:15:13.431 10:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.431 10:02:12 -- nvmf/common.sh@46 -- # : 0 00:15:13.431 10:02:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:13.431 10:02:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:13.431 10:02:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:13.431 10:02:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.431 10:02:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.431 10:02:12 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:13.431 10:02:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:13.431 10:02:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:13.431 10:02:12 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.431 10:02:12 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.431 10:02:12 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:13.431 10:02:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:13.431 10:02:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.691 10:02:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:13.691 10:02:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:13.691 10:02:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:13.691 10:02:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.691 10:02:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.691 10:02:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.691 10:02:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:13.691 10:02:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:13.691 10:02:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:13.691 10:02:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:13.691 10:02:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:13.691 10:02:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:13.691 10:02:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.691 10:02:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.691 10:02:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:13.691 10:02:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:13.691 10:02:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.691 10:02:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.691 10:02:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.691 10:02:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.691 10:02:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.691 10:02:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.691 10:02:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.691 10:02:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.691 10:02:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:13.691 10:02:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:13.691 Cannot find device "nvmf_tgt_br" 00:15:13.691 10:02:12 -- nvmf/common.sh@154 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.691 Cannot find device "nvmf_tgt_br2" 00:15:13.691 10:02:12 -- nvmf/common.sh@155 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:13.691 10:02:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:13.691 Cannot find device "nvmf_tgt_br" 00:15:13.691 10:02:12 -- nvmf/common.sh@157 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:13.691 Cannot find device "nvmf_tgt_br2" 00:15:13.691 10:02:12 -- nvmf/common.sh@158 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:13.691 10:02:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:13.691 10:02:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.691 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.691 10:02:12 -- nvmf/common.sh@161 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.691 10:02:12 -- nvmf/common.sh@162 -- # true 00:15:13.691 10:02:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.691 10:02:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.691 10:02:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.691 10:02:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.691 10:02:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.691 10:02:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.691 10:02:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.691 10:02:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.691 10:02:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.691 10:02:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:13.691 10:02:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:13.691 10:02:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:13.691 10:02:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:13.691 10:02:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.691 10:02:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.691 10:02:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.691 10:02:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:13.691 10:02:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:13.691 10:02:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.950 10:02:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.950 10:02:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.950 10:02:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.950 10:02:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.950 10:02:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:13.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:13.950 00:15:13.950 --- 10.0.0.2 ping statistics --- 00:15:13.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.950 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:13.950 10:02:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:13.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:13.950 00:15:13.950 --- 10.0.0.3 ping statistics --- 00:15:13.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.950 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:13.950 10:02:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:13.950 00:15:13.950 --- 10.0.0.1 ping statistics --- 00:15:13.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.950 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:13.950 10:02:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.950 10:02:12 -- nvmf/common.sh@421 -- # return 0 00:15:13.950 10:02:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:13.950 10:02:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.950 10:02:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:13.950 10:02:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:13.950 10:02:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.950 10:02:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:13.950 10:02:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:13.950 10:02:12 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:13.950 10:02:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:13.950 10:02:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.950 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:13.950 10:02:12 -- nvmf/common.sh@469 -- # nvmfpid=84886 00:15:13.950 10:02:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:13.950 10:02:12 -- nvmf/common.sh@470 -- # waitforlisten 84886 00:15:13.950 10:02:12 -- common/autotest_common.sh@829 -- # '[' -z 84886 ']' 00:15:13.950 10:02:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.950 10:02:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.950 10:02:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.950 10:02:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.950 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:13.950 [2024-12-16 10:02:12.449386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.950 [2024-12-16 10:02:12.449469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.209 [2024-12-16 10:02:12.585486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.209 [2024-12-16 10:02:12.643399] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:14.209 [2024-12-16 10:02:12.643543] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.209 [2024-12-16 10:02:12.643555] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.209 [2024-12-16 10:02:12.643563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.209 [2024-12-16 10:02:12.643731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.209 [2024-12-16 10:02:12.643903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.209 [2024-12-16 10:02:12.644439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.209 [2024-12-16 10:02:12.644449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.209 10:02:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.209 10:02:12 -- common/autotest_common.sh@862 -- # return 0 00:15:14.209 10:02:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:14.209 10:02:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.209 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.209 10:02:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.209 10:02:12 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:14.209 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.209 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.209 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.209 10:02:12 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:14.209 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.209 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.209 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.209 10:02:12 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.209 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.209 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.209 [2024-12-16 10:02:12.821856] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.209 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.209 10:02:12 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.209 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.209 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.469 Malloc0 00:15:14.469 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.469 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.469 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.469 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.469 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.469 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.469 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.469 10:02:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.469 10:02:12 -- common/autotest_common.sh@10 -- # set +x 00:15:14.469 [2024-12-16 10:02:12.884451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.469 10:02:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84926 00:15:14.469 10:02:12 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@30 -- # READ_PID=84928 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # config=() 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.469 10:02:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.469 { 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme$subsystem", 00:15:14.469 "trtype": "$TEST_TRANSPORT", 00:15:14.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "$NVMF_PORT", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.469 "hdgst": ${hdgst:-false}, 00:15:14.469 "ddgst": ${ddgst:-false} 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 } 00:15:14.469 EOF 00:15:14.469 )") 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # config=() 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.469 10:02:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84930 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.469 { 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme$subsystem", 00:15:14.469 "trtype": "$TEST_TRANSPORT", 00:15:14.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "$NVMF_PORT", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.469 "hdgst": ${hdgst:-false}, 00:15:14.469 "ddgst": ${ddgst:-false} 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 } 00:15:14.469 EOF 00:15:14.469 )") 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84933 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # cat 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # cat 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@35 -- # sync 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # config=() 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.469 10:02:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.469 { 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme$subsystem", 00:15:14.469 "trtype": "$TEST_TRANSPORT", 00:15:14.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "$NVMF_PORT", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.469 "hdgst": ${hdgst:-false}, 00:15:14.469 "ddgst": ${ddgst:-false} 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 } 00:15:14.469 EOF 00:15:14.469 )") 00:15:14.469 10:02:12 -- nvmf/common.sh@544 -- # jq . 00:15:14.469 10:02:12 -- nvmf/common.sh@544 -- # jq . 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # cat 00:15:14.469 10:02:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.469 10:02:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme1", 00:15:14.469 "trtype": "tcp", 00:15:14.469 "traddr": "10.0.0.2", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "4420", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.469 "hdgst": false, 00:15:14.469 "ddgst": false 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 }' 00:15:14.469 10:02:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.469 10:02:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme1", 00:15:14.469 "trtype": "tcp", 00:15:14.469 "traddr": "10.0.0.2", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "4420", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.469 "hdgst": false, 00:15:14.469 "ddgst": false 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 }' 00:15:14.469 10:02:12 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # config=() 00:15:14.469 10:02:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.469 10:02:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.469 { 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme$subsystem", 00:15:14.469 "trtype": "$TEST_TRANSPORT", 00:15:14.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "$NVMF_PORT", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.469 "hdgst": ${hdgst:-false}, 00:15:14.469 "ddgst": ${ddgst:-false} 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 } 00:15:14.469 EOF 00:15:14.469 )") 00:15:14.469 10:02:12 -- nvmf/common.sh@542 -- # cat 00:15:14.469 10:02:12 -- nvmf/common.sh@544 -- # jq . 00:15:14.469 10:02:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.469 10:02:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme1", 00:15:14.469 "trtype": "tcp", 00:15:14.469 "traddr": "10.0.0.2", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "4420", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.469 "hdgst": false, 00:15:14.469 "ddgst": false 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 }' 00:15:14.469 10:02:12 -- nvmf/common.sh@544 -- # jq . 
00:15:14.469 10:02:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.469 10:02:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.469 "params": { 00:15:14.469 "name": "Nvme1", 00:15:14.469 "trtype": "tcp", 00:15:14.469 "traddr": "10.0.0.2", 00:15:14.469 "adrfam": "ipv4", 00:15:14.469 "trsvcid": "4420", 00:15:14.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.469 "hdgst": false, 00:15:14.469 "ddgst": false 00:15:14.469 }, 00:15:14.469 "method": "bdev_nvme_attach_controller" 00:15:14.469 }' 00:15:14.469 [2024-12-16 10:02:12.947634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:14.470 [2024-12-16 10:02:12.947717] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:14.470 10:02:12 -- target/bdev_io_wait.sh@37 -- # wait 84926 00:15:14.470 [2024-12-16 10:02:12.963319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:14.470 [2024-12-16 10:02:12.963410] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:14.470 [2024-12-16 10:02:12.968283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:14.470 [2024-12-16 10:02:12.968374] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:14.470 [2024-12-16 10:02:12.977383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:14.470 [2024-12-16 10:02:12.977455] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:14.728 [2024-12-16 10:02:13.163638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.728 [2024-12-16 10:02:13.229483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:14.728 [2024-12-16 10:02:13.236742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.728 [2024-12-16 10:02:13.302804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:14.728 [2024-12-16 10:02:13.306327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.987 [2024-12-16 10:02:13.370340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:14.987 [2024-12-16 10:02:13.383864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.987 Running I/O for 1 seconds... 00:15:14.987 Running I/O for 1 seconds... 00:15:14.987 [2024-12-16 10:02:13.451117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:14.987 Running I/O for 1 seconds... 00:15:14.987 Running I/O for 1 seconds... 
00:15:15.923 00:15:15.923 Latency(us) 00:15:15.923 [2024-12-16T10:02:14.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.923 [2024-12-16T10:02:14.548Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:15.923 Nvme1n1 : 1.00 197973.60 773.33 0.00 0.00 644.26 247.62 934.63 00:15:15.923 [2024-12-16T10:02:14.548Z] =================================================================================================================== 00:15:15.923 [2024-12-16T10:02:14.548Z] Total : 197973.60 773.33 0.00 0.00 644.26 247.62 934.63 00:15:15.923 00:15:15.923 Latency(us) 00:15:15.923 [2024-12-16T10:02:14.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.923 [2024-12-16T10:02:14.548Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:15.923 Nvme1n1 : 1.01 10784.31 42.13 0.00 0.00 11826.03 4885.41 16920.20 00:15:15.923 [2024-12-16T10:02:14.548Z] =================================================================================================================== 00:15:15.923 [2024-12-16T10:02:14.548Z] Total : 10784.31 42.13 0.00 0.00 11826.03 4885.41 16920.20 00:15:15.923 00:15:15.923 Latency(us) 00:15:15.923 [2024-12-16T10:02:14.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.923 [2024-12-16T10:02:14.549Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:15.924 Nvme1n1 : 1.01 8209.73 32.07 0.00 0.00 15510.24 9711.24 26452.71 00:15:15.924 [2024-12-16T10:02:14.549Z] =================================================================================================================== 00:15:15.924 [2024-12-16T10:02:14.549Z] Total : 8209.73 32.07 0.00 0.00 15510.24 9711.24 26452.71 00:15:16.182 00:15:16.182 Latency(us) 00:15:16.182 [2024-12-16T10:02:14.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.182 [2024-12-16T10:02:14.807Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:16.182 Nvme1n1 : 1.01 9234.41 36.07 0.00 0.00 13807.87 7089.80 25380.31 00:15:16.182 [2024-12-16T10:02:14.807Z] =================================================================================================================== 00:15:16.182 [2024-12-16T10:02:14.807Z] Total : 9234.41 36.07 0.00 0.00 13807.87 7089.80 25380.31 00:15:16.182 10:02:14 -- target/bdev_io_wait.sh@38 -- # wait 84928 00:15:16.182 10:02:14 -- target/bdev_io_wait.sh@39 -- # wait 84930 00:15:16.441 10:02:14 -- target/bdev_io_wait.sh@40 -- # wait 84933 00:15:16.441 10:02:14 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.441 10:02:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.441 10:02:14 -- common/autotest_common.sh@10 -- # set +x 00:15:16.441 10:02:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.441 10:02:14 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:16.441 10:02:14 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:16.441 10:02:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.441 10:02:14 -- nvmf/common.sh@116 -- # sync 00:15:16.441 10:02:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.441 10:02:14 -- nvmf/common.sh@119 -- # set +e 00:15:16.441 10:02:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.441 10:02:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.441 rmmod nvme_tcp 00:15:16.441 rmmod nvme_fabrics 00:15:16.441 rmmod nvme_keyring 00:15:16.441 10:02:14 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.441 10:02:14 -- nvmf/common.sh@123 -- # set -e 00:15:16.441 10:02:14 -- nvmf/common.sh@124 -- # return 0 00:15:16.441 10:02:14 -- nvmf/common.sh@477 -- # '[' -n 84886 ']' 00:15:16.441 10:02:14 -- nvmf/common.sh@478 -- # killprocess 84886 00:15:16.441 10:02:14 -- common/autotest_common.sh@936 -- # '[' -z 84886 ']' 00:15:16.441 10:02:14 -- common/autotest_common.sh@940 -- # kill -0 84886 00:15:16.441 10:02:14 -- common/autotest_common.sh@941 -- # uname 00:15:16.441 10:02:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.441 10:02:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84886 00:15:16.441 10:02:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.441 10:02:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.441 killing process with pid 84886 00:15:16.441 10:02:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84886' 00:15:16.441 10:02:15 -- common/autotest_common.sh@955 -- # kill 84886 00:15:16.441 10:02:15 -- common/autotest_common.sh@960 -- # wait 84886 00:15:16.699 10:02:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:16.699 10:02:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:16.699 10:02:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:16.699 10:02:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.699 10:02:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:16.699 10:02:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.699 10:02:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.699 10:02:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.699 10:02:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:16.699 00:15:16.699 real 0m3.339s 00:15:16.699 user 0m14.792s 00:15:16.699 sys 0m2.074s 00:15:16.699 10:02:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:16.699 10:02:15 -- common/autotest_common.sh@10 -- # set +x 00:15:16.699 ************************************ 00:15:16.699 END TEST nvmf_bdev_io_wait 00:15:16.699 ************************************ 00:15:16.699 10:02:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:16.699 10:02:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:16.699 10:02:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.699 10:02:15 -- common/autotest_common.sh@10 -- # set +x 00:15:16.699 ************************************ 00:15:16.699 START TEST nvmf_queue_depth 00:15:16.699 ************************************ 00:15:16.699 10:02:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:16.958 * Looking for test storage... 
00:15:16.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.958 10:02:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:16.958 10:02:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:16.958 10:02:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:16.958 10:02:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:16.958 10:02:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:16.958 10:02:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:16.958 10:02:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:16.958 10:02:15 -- scripts/common.sh@335 -- # IFS=.-: 00:15:16.958 10:02:15 -- scripts/common.sh@335 -- # read -ra ver1 00:15:16.958 10:02:15 -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.958 10:02:15 -- scripts/common.sh@336 -- # read -ra ver2 00:15:16.958 10:02:15 -- scripts/common.sh@337 -- # local 'op=<' 00:15:16.958 10:02:15 -- scripts/common.sh@339 -- # ver1_l=2 00:15:16.958 10:02:15 -- scripts/common.sh@340 -- # ver2_l=1 00:15:16.958 10:02:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:16.958 10:02:15 -- scripts/common.sh@343 -- # case "$op" in 00:15:16.958 10:02:15 -- scripts/common.sh@344 -- # : 1 00:15:16.958 10:02:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:16.958 10:02:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.958 10:02:15 -- scripts/common.sh@364 -- # decimal 1 00:15:16.958 10:02:15 -- scripts/common.sh@352 -- # local d=1 00:15:16.958 10:02:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.958 10:02:15 -- scripts/common.sh@354 -- # echo 1 00:15:16.958 10:02:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:16.958 10:02:15 -- scripts/common.sh@365 -- # decimal 2 00:15:16.958 10:02:15 -- scripts/common.sh@352 -- # local d=2 00:15:16.958 10:02:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.958 10:02:15 -- scripts/common.sh@354 -- # echo 2 00:15:16.958 10:02:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:16.958 10:02:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:16.958 10:02:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:16.958 10:02:15 -- scripts/common.sh@367 -- # return 0 00:15:16.958 10:02:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.958 10:02:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.959 --rc genhtml_branch_coverage=1 00:15:16.959 --rc genhtml_function_coverage=1 00:15:16.959 --rc genhtml_legend=1 00:15:16.959 --rc geninfo_all_blocks=1 00:15:16.959 --rc geninfo_unexecuted_blocks=1 00:15:16.959 00:15:16.959 ' 00:15:16.959 10:02:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.959 --rc genhtml_branch_coverage=1 00:15:16.959 --rc genhtml_function_coverage=1 00:15:16.959 --rc genhtml_legend=1 00:15:16.959 --rc geninfo_all_blocks=1 00:15:16.959 --rc geninfo_unexecuted_blocks=1 00:15:16.959 00:15:16.959 ' 00:15:16.959 10:02:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.959 --rc genhtml_branch_coverage=1 00:15:16.959 --rc genhtml_function_coverage=1 00:15:16.959 --rc genhtml_legend=1 00:15:16.959 --rc geninfo_all_blocks=1 00:15:16.959 --rc geninfo_unexecuted_blocks=1 00:15:16.959 00:15:16.959 ' 00:15:16.959 
10:02:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:16.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.959 --rc genhtml_branch_coverage=1 00:15:16.959 --rc genhtml_function_coverage=1 00:15:16.959 --rc genhtml_legend=1 00:15:16.959 --rc geninfo_all_blocks=1 00:15:16.959 --rc geninfo_unexecuted_blocks=1 00:15:16.959 00:15:16.959 ' 00:15:16.959 10:02:15 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.959 10:02:15 -- nvmf/common.sh@7 -- # uname -s 00:15:16.959 10:02:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.959 10:02:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.959 10:02:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.959 10:02:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.959 10:02:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.959 10:02:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.959 10:02:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.959 10:02:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.959 10:02:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.959 10:02:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:16.959 10:02:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:16.959 10:02:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.959 10:02:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.959 10:02:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.959 10:02:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.959 10:02:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.959 10:02:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.959 10:02:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.959 10:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.959 10:02:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.959 10:02:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.959 10:02:15 -- paths/export.sh@5 -- # export PATH 00:15:16.959 10:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.959 10:02:15 -- nvmf/common.sh@46 -- # : 0 00:15:16.959 10:02:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.959 10:02:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.959 10:02:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.959 10:02:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.959 10:02:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.959 10:02:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.959 10:02:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.959 10:02:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.959 10:02:15 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:16.959 10:02:15 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:16.959 10:02:15 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.959 10:02:15 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:16.959 10:02:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.959 10:02:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.959 10:02:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.959 10:02:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.959 10:02:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:16.959 10:02:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.959 10:02:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.959 10:02:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.959 10:02:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:16.959 10:02:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:16.959 10:02:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.959 10:02:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.959 10:02:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.959 10:02:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:16.959 10:02:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.959 10:02:15 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.959 10:02:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.959 10:02:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.959 10:02:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.959 10:02:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.959 10:02:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.959 10:02:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.959 10:02:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:16.959 10:02:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:16.959 Cannot find device "nvmf_tgt_br" 00:15:16.959 10:02:15 -- nvmf/common.sh@154 -- # true 00:15:16.959 10:02:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.959 Cannot find device "nvmf_tgt_br2" 00:15:16.959 10:02:15 -- nvmf/common.sh@155 -- # true 00:15:16.959 10:02:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:16.959 10:02:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:16.959 Cannot find device "nvmf_tgt_br" 00:15:16.959 10:02:15 -- nvmf/common.sh@157 -- # true 00:15:16.959 10:02:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:16.959 Cannot find device "nvmf_tgt_br2" 00:15:16.959 10:02:15 -- nvmf/common.sh@158 -- # true 00:15:16.959 10:02:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:17.218 10:02:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:17.218 10:02:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.218 10:02:15 -- nvmf/common.sh@161 -- # true 00:15:17.218 10:02:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.218 10:02:15 -- nvmf/common.sh@162 -- # true 00:15:17.218 10:02:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.218 10:02:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.218 10:02:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.218 10:02:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.218 10:02:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.218 10:02:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.218 10:02:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.218 10:02:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.218 10:02:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.218 10:02:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:17.218 10:02:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:17.218 10:02:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:17.218 10:02:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:17.218 10:02:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.218 10:02:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:17.218 10:02:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.218 10:02:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:17.218 10:02:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:17.218 10:02:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.218 10:02:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.218 10:02:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.218 10:02:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.218 10:02:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.218 10:02:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:17.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:17.218 00:15:17.218 --- 10.0.0.2 ping statistics --- 00:15:17.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.219 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:17.219 10:02:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:17.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:17.219 00:15:17.219 --- 10.0.0.3 ping statistics --- 00:15:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.219 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:17.219 10:02:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:15:17.219 00:15:17.219 --- 10.0.0.1 ping statistics --- 00:15:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.219 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:17.219 10:02:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.219 10:02:15 -- nvmf/common.sh@421 -- # return 0 00:15:17.219 10:02:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.219 10:02:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.219 10:02:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.219 10:02:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.219 10:02:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.219 10:02:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.219 10:02:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.478 10:02:15 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:17.478 10:02:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.478 10:02:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.478 10:02:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
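The setup traced above is nvmf_veth_init: the queue-depth test runs the target inside a dedicated network namespace, reached over two veth pairs whose host-side peers are joined by a bridge, and the three pings confirm the 10.0.0.0/24 topology before the target is started. Condensed to the essential commands (interface names and addresses exactly as in the trace; error handling and teardown omitted):

# namespace for the target, one veth pair for the initiator and one per target path
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator keeps 10.0.0.1, the target namespace owns 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and let NVMe/TCP (port 4420) in
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # host can reach both target addresses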
00:15:17.478 10:02:15 -- nvmf/common.sh@469 -- # nvmfpid=85153 00:15:17.478 10:02:15 -- nvmf/common.sh@470 -- # waitforlisten 85153 00:15:17.478 10:02:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.478 10:02:15 -- common/autotest_common.sh@829 -- # '[' -z 85153 ']' 00:15:17.478 10:02:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.478 10:02:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.478 10:02:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.478 10:02:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.478 10:02:15 -- common/autotest_common.sh@10 -- # set +x 00:15:17.478 [2024-12-16 10:02:15.913214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.478 [2024-12-16 10:02:15.913306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.478 [2024-12-16 10:02:16.054854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.736 [2024-12-16 10:02:16.110899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.736 [2024-12-16 10:02:16.111036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.736 [2024-12-16 10:02:16.111050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.736 [2024-12-16 10:02:16.111057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
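These notices come from nvmfappstart: the target binary is launched inside the namespace and the helper blocks until its JSON-RPC socket answers. A minimal sketch of that start-and-wait pattern (binary path, namespace and flags as in the trace; the probe RPC and retry budget here are illustrative, the real waitforlisten helper performs its own checks):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in {1..100}; do                                   # give the app time to create /var/tmp/spdk.sock
    $rpc -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done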
00:15:17.736 [2024-12-16 10:02:16.111087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.304 10:02:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.304 10:02:16 -- common/autotest_common.sh@862 -- # return 0 00:15:18.304 10:02:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.304 10:02:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 10:02:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.304 10:02:16 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.304 10:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 [2024-12-16 10:02:16.863577] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.304 10:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.304 10:02:16 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.304 10:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 Malloc0 00:15:18.304 10:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.304 10:02:16 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.304 10:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 10:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.304 10:02:16 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.304 10:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 10:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.304 10:02:16 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.304 10:02:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.304 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.304 [2024-12-16 10:02:16.927136] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
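The queue_depth.sh@23-27 calls above build the whole target-side configuration through rpc_cmd, which forwards each command to the target's RPC socket (by default /var/tmp/spdk.sock). Issued directly with rpc.py, the same sequence is roughly (arguments copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the test's options
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace of cnode1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420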
00:15:18.563 10:02:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.563 10:02:16 -- target/queue_depth.sh@30 -- # bdevperf_pid=85203 00:15:18.563 10:02:16 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:18.563 10:02:16 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.563 10:02:16 -- target/queue_depth.sh@33 -- # waitforlisten 85203 /var/tmp/bdevperf.sock 00:15:18.563 10:02:16 -- common/autotest_common.sh@829 -- # '[' -z 85203 ']' 00:15:18.563 10:02:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.563 10:02:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.563 10:02:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.563 10:02:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.563 10:02:16 -- common/autotest_common.sh@10 -- # set +x 00:15:18.563 [2024-12-16 10:02:16.975053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:18.563 [2024-12-16 10:02:16.975473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85203 ] 00:15:18.563 [2024-12-16 10:02:17.112685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.563 [2024-12-16 10:02:17.177682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.499 10:02:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.499 10:02:17 -- common/autotest_common.sh@862 -- # return 0 00:15:19.499 10:02:17 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:19.499 10:02:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.499 10:02:17 -- common/autotest_common.sh@10 -- # set +x 00:15:19.499 NVMe0n1 00:15:19.499 10:02:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.499 10:02:18 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:19.757 Running I/O for 10 seconds... 
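This is the actual queue-depth measurement: bdevperf is started idle (-z) on its own RPC socket, the remote namespace is attached over NVMe/TCP, and perform_tests kicks off a 10-second verify workload at queue depth 1024 with 4 KiB I/Os. The same steps outside the harness (paths and arguments as in the trace; waiting for the bdevperf socket omitted):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &    # idle until told to run
bdevperf_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # creates bdev NVMe0n1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests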
00:15:29.733 00:15:29.733 Latency(us) 00:15:29.733 [2024-12-16T10:02:28.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.733 [2024-12-16T10:02:28.358Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:29.733 Verification LBA range: start 0x0 length 0x4000 00:15:29.733 NVMe0n1 : 10.05 16900.64 66.02 0.00 0.00 60394.62 10485.76 49807.36 00:15:29.733 [2024-12-16T10:02:28.358Z] =================================================================================================================== 00:15:29.733 [2024-12-16T10:02:28.358Z] Total : 16900.64 66.02 0.00 0.00 60394.62 10485.76 49807.36 00:15:29.733 0 00:15:29.733 10:02:28 -- target/queue_depth.sh@39 -- # killprocess 85203 00:15:29.733 10:02:28 -- common/autotest_common.sh@936 -- # '[' -z 85203 ']' 00:15:29.733 10:02:28 -- common/autotest_common.sh@940 -- # kill -0 85203 00:15:29.733 10:02:28 -- common/autotest_common.sh@941 -- # uname 00:15:29.733 10:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.733 10:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85203 00:15:29.733 killing process with pid 85203 00:15:29.733 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.733 00:15:29.733 Latency(us) 00:15:29.733 [2024-12-16T10:02:28.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.733 [2024-12-16T10:02:28.358Z] =================================================================================================================== 00:15:29.733 [2024-12-16T10:02:28.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.733 10:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:29.733 10:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:29.733 10:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85203' 00:15:29.733 10:02:28 -- common/autotest_common.sh@955 -- # kill 85203 00:15:29.733 10:02:28 -- common/autotest_common.sh@960 -- # wait 85203 00:15:29.992 10:02:28 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:29.992 10:02:28 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:29.992 10:02:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.992 10:02:28 -- nvmf/common.sh@116 -- # sync 00:15:29.992 10:02:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:29.992 10:02:28 -- nvmf/common.sh@119 -- # set +e 00:15:29.992 10:02:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.992 10:02:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:29.992 rmmod nvme_tcp 00:15:29.992 rmmod nvme_fabrics 00:15:29.992 rmmod nvme_keyring 00:15:29.992 10:02:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.992 10:02:28 -- nvmf/common.sh@123 -- # set -e 00:15:29.992 10:02:28 -- nvmf/common.sh@124 -- # return 0 00:15:29.992 10:02:28 -- nvmf/common.sh@477 -- # '[' -n 85153 ']' 00:15:29.992 10:02:28 -- nvmf/common.sh@478 -- # killprocess 85153 00:15:29.992 10:02:28 -- common/autotest_common.sh@936 -- # '[' -z 85153 ']' 00:15:29.992 10:02:28 -- common/autotest_common.sh@940 -- # kill -0 85153 00:15:29.992 10:02:28 -- common/autotest_common.sh@941 -- # uname 00:15:29.992 10:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.992 10:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85153 00:15:30.251 killing process with pid 85153 00:15:30.251 10:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:30.251 10:02:28 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:30.251 10:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85153' 00:15:30.251 10:02:28 -- common/autotest_common.sh@955 -- # kill 85153 00:15:30.251 10:02:28 -- common/autotest_common.sh@960 -- # wait 85153 00:15:30.251 10:02:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:30.251 10:02:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:30.251 10:02:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:30.251 10:02:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.251 10:02:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:30.251 10:02:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.251 10:02:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.251 10:02:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.251 10:02:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:30.251 ************************************ 00:15:30.251 END TEST nvmf_queue_depth 00:15:30.251 ************************************ 00:15:30.251 00:15:30.251 real 0m13.598s 00:15:30.251 user 0m23.046s 00:15:30.251 sys 0m2.185s 00:15:30.251 10:02:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:30.251 10:02:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.510 10:02:28 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:30.510 10:02:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:30.510 10:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.510 10:02:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.510 ************************************ 00:15:30.510 START TEST nvmf_multipath 00:15:30.510 ************************************ 00:15:30.510 10:02:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:30.510 * Looking for test storage... 00:15:30.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.510 10:02:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:30.510 10:02:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:30.510 10:02:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:30.510 10:02:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:30.510 10:02:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:30.510 10:02:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:30.510 10:02:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:30.510 10:02:29 -- scripts/common.sh@335 -- # IFS=.-: 00:15:30.510 10:02:29 -- scripts/common.sh@335 -- # read -ra ver1 00:15:30.510 10:02:29 -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.510 10:02:29 -- scripts/common.sh@336 -- # read -ra ver2 00:15:30.510 10:02:29 -- scripts/common.sh@337 -- # local 'op=<' 00:15:30.510 10:02:29 -- scripts/common.sh@339 -- # ver1_l=2 00:15:30.510 10:02:29 -- scripts/common.sh@340 -- # ver2_l=1 00:15:30.510 10:02:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:30.510 10:02:29 -- scripts/common.sh@343 -- # case "$op" in 00:15:30.510 10:02:29 -- scripts/common.sh@344 -- # : 1 00:15:30.510 10:02:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:30.510 10:02:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.510 10:02:29 -- scripts/common.sh@364 -- # decimal 1 00:15:30.510 10:02:29 -- scripts/common.sh@352 -- # local d=1 00:15:30.510 10:02:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.510 10:02:29 -- scripts/common.sh@354 -- # echo 1 00:15:30.510 10:02:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:30.510 10:02:29 -- scripts/common.sh@365 -- # decimal 2 00:15:30.510 10:02:29 -- scripts/common.sh@352 -- # local d=2 00:15:30.510 10:02:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.510 10:02:29 -- scripts/common.sh@354 -- # echo 2 00:15:30.510 10:02:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:30.510 10:02:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:30.510 10:02:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:30.510 10:02:29 -- scripts/common.sh@367 -- # return 0 00:15:30.510 10:02:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.510 10:02:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.510 --rc genhtml_branch_coverage=1 00:15:30.510 --rc genhtml_function_coverage=1 00:15:30.510 --rc genhtml_legend=1 00:15:30.510 --rc geninfo_all_blocks=1 00:15:30.510 --rc geninfo_unexecuted_blocks=1 00:15:30.510 00:15:30.510 ' 00:15:30.510 10:02:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.510 --rc genhtml_branch_coverage=1 00:15:30.510 --rc genhtml_function_coverage=1 00:15:30.510 --rc genhtml_legend=1 00:15:30.510 --rc geninfo_all_blocks=1 00:15:30.510 --rc geninfo_unexecuted_blocks=1 00:15:30.510 00:15:30.510 ' 00:15:30.510 10:02:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.510 --rc genhtml_branch_coverage=1 00:15:30.510 --rc genhtml_function_coverage=1 00:15:30.510 --rc genhtml_legend=1 00:15:30.510 --rc geninfo_all_blocks=1 00:15:30.510 --rc geninfo_unexecuted_blocks=1 00:15:30.510 00:15:30.510 ' 00:15:30.510 10:02:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:30.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.510 --rc genhtml_branch_coverage=1 00:15:30.510 --rc genhtml_function_coverage=1 00:15:30.510 --rc genhtml_legend=1 00:15:30.510 --rc geninfo_all_blocks=1 00:15:30.510 --rc geninfo_unexecuted_blocks=1 00:15:30.510 00:15:30.510 ' 00:15:30.510 10:02:29 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.510 10:02:29 -- nvmf/common.sh@7 -- # uname -s 00:15:30.510 10:02:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.510 10:02:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.511 10:02:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.511 10:02:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.511 10:02:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.511 10:02:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.511 10:02:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.511 10:02:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.511 10:02:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.511 10:02:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.511 10:02:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:30.511 
10:02:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:30.511 10:02:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.511 10:02:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.511 10:02:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.511 10:02:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.511 10:02:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.511 10:02:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.511 10:02:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.511 10:02:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.511 10:02:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.511 10:02:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.511 10:02:29 -- paths/export.sh@5 -- # export PATH 00:15:30.511 10:02:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.511 10:02:29 -- nvmf/common.sh@46 -- # : 0 00:15:30.511 10:02:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:30.511 10:02:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:30.511 10:02:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:30.511 10:02:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.511 10:02:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.511 10:02:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:30.511 10:02:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:30.511 10:02:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:30.511 10:02:29 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.511 10:02:29 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.511 10:02:29 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:30.511 10:02:29 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.511 10:02:29 -- target/multipath.sh@43 -- # nvmftestinit 00:15:30.511 10:02:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:30.511 10:02:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.511 10:02:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:30.511 10:02:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:30.511 10:02:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:30.511 10:02:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.511 10:02:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.511 10:02:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.511 10:02:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:30.511 10:02:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:30.511 10:02:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:30.511 10:02:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:30.511 10:02:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:30.511 10:02:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:30.511 10:02:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.511 10:02:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.511 10:02:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.511 10:02:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:30.511 10:02:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.511 10:02:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.511 10:02:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.511 10:02:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.511 10:02:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.511 10:02:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.511 10:02:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.511 10:02:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.511 10:02:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:30.770 10:02:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:30.770 Cannot find device "nvmf_tgt_br" 00:15:30.770 10:02:29 -- nvmf/common.sh@154 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.770 Cannot find device "nvmf_tgt_br2" 00:15:30.770 10:02:29 -- nvmf/common.sh@155 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:30.770 10:02:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:30.770 Cannot find device "nvmf_tgt_br" 00:15:30.770 10:02:29 -- nvmf/common.sh@157 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:30.770 Cannot find device "nvmf_tgt_br2" 00:15:30.770 10:02:29 -- nvmf/common.sh@158 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:30.770 10:02:29 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:30.770 10:02:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:30.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.770 10:02:29 -- nvmf/common.sh@161 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:30.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:30.770 10:02:29 -- nvmf/common.sh@162 -- # true 00:15:30.770 10:02:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:30.770 10:02:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:30.770 10:02:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:30.770 10:02:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:30.770 10:02:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:30.770 10:02:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:30.770 10:02:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:30.770 10:02:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:30.770 10:02:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:30.770 10:02:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:30.770 10:02:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:30.770 10:02:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:30.770 10:02:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:30.770 10:02:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:30.770 10:02:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:30.770 10:02:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:30.770 10:02:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:30.770 10:02:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:30.770 10:02:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.029 10:02:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.029 10:02:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.029 10:02:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.029 10:02:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.029 10:02:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:31.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:31.029 00:15:31.029 --- 10.0.0.2 ping statistics --- 00:15:31.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.029 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:31.029 10:02:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:31.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:31.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:31.029 00:15:31.029 --- 10.0.0.3 ping statistics --- 00:15:31.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.029 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:31.029 10:02:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:31.029 00:15:31.029 --- 10.0.0.1 ping statistics --- 00:15:31.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.029 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:31.029 10:02:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.029 10:02:29 -- nvmf/common.sh@421 -- # return 0 00:15:31.029 10:02:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:31.029 10:02:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.029 10:02:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:31.029 10:02:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:31.029 10:02:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.029 10:02:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:31.029 10:02:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:31.029 10:02:29 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:31.029 10:02:29 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:31.029 10:02:29 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:31.029 10:02:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:31.029 10:02:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.029 10:02:29 -- common/autotest_common.sh@10 -- # set +x 00:15:31.029 10:02:29 -- nvmf/common.sh@469 -- # nvmfpid=85543 00:15:31.029 10:02:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.029 10:02:29 -- nvmf/common.sh@470 -- # waitforlisten 85543 00:15:31.029 10:02:29 -- common/autotest_common.sh@829 -- # '[' -z 85543 ']' 00:15:31.029 10:02:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.029 10:02:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.029 10:02:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.029 10:02:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.029 10:02:29 -- common/autotest_common.sh@10 -- # set +x 00:15:31.029 [2024-12-16 10:02:29.527035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:31.029 [2024-12-16 10:02:29.527124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.288 [2024-12-16 10:02:29.671176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.288 [2024-12-16 10:02:29.739278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:31.288 [2024-12-16 10:02:29.739465] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:31.288 [2024-12-16 10:02:29.739483] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.288 [2024-12-16 10:02:29.739495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.288 [2024-12-16 10:02:29.739611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.288 [2024-12-16 10:02:29.740174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.288 [2024-12-16 10:02:29.740417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.288 [2024-12-16 10:02:29.740425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.222 10:02:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.222 10:02:30 -- common/autotest_common.sh@862 -- # return 0 00:15:32.222 10:02:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:32.222 10:02:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.222 10:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:32.222 10:02:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.222 10:02:30 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.222 [2024-12-16 10:02:30.780603] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.222 10:02:30 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:32.480 Malloc0 00:15:32.480 10:02:31 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:32.738 10:02:31 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.997 10:02:31 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.255 [2024-12-16 10:02:31.749154] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.255 10:02:31 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:33.515 [2024-12-16 10:02:31.977350] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.515 10:02:31 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:33.787 10:02:32 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:34.059 10:02:32 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.059 10:02:32 -- common/autotest_common.sh@1187 -- # local i=0 00:15:34.059 10:02:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.059 10:02:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:34.059 10:02:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:35.962 10:02:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:15:35.962 10:02:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:35.962 10:02:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.962 10:02:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:35.962 10:02:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.962 10:02:34 -- common/autotest_common.sh@1197 -- # return 0 00:15:35.962 10:02:34 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:35.962 10:02:34 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:35.962 10:02:34 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:35.962 10:02:34 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:35.962 10:02:34 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:35.962 10:02:34 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:35.962 10:02:34 -- target/multipath.sh@38 -- # return 0 00:15:35.962 10:02:34 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:35.962 10:02:34 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:35.962 10:02:34 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:35.962 10:02:34 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:35.962 10:02:34 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:35.962 10:02:34 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:35.962 10:02:34 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:35.962 10:02:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:35.962 10:02:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.962 10:02:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:35.962 10:02:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:35.962 10:02:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:35.962 10:02:34 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:35.962 10:02:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:35.962 10:02:34 -- target/multipath.sh@22 -- # local timeout=20 00:15:35.962 10:02:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:35.962 10:02:34 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:35.962 10:02:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:35.962 10:02:34 -- target/multipath.sh@85 -- # echo numa 00:15:35.962 10:02:34 -- target/multipath.sh@88 -- # fio_pid=85682 00:15:35.962 10:02:34 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:35.962 10:02:34 -- target/multipath.sh@90 -- # sleep 1 00:15:35.962 [global] 00:15:35.962 thread=1 00:15:35.962 invalidate=1 00:15:35.962 rw=randrw 00:15:35.962 time_based=1 00:15:35.962 runtime=6 00:15:35.962 ioengine=libaio 00:15:35.962 direct=1 00:15:35.962 bs=4096 00:15:35.962 iodepth=128 00:15:35.962 norandommap=0 00:15:35.962 numjobs=1 00:15:35.962 00:15:35.962 verify_dump=1 00:15:35.962 verify_backlog=512 00:15:35.962 verify_state_save=0 00:15:35.962 do_verify=1 00:15:35.962 verify=crc32c-intel 00:15:35.962 [job0] 00:15:35.962 filename=/dev/nvme0n1 00:15:35.962 Could not set queue depth (nvme0n1) 00:15:36.221 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:36.221 fio-3.35 00:15:36.221 Starting 1 thread 00:15:37.156 10:02:35 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:37.156 10:02:35 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:37.415 10:02:35 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:37.415 10:02:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:37.415 10:02:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.415 10:02:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:37.415 10:02:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:37.415 10:02:35 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:37.415 10:02:35 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:37.415 10:02:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:37.415 10:02:35 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.415 10:02:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:37.415 10:02:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.415 10:02:35 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:37.415 10:02:35 -- target/multipath.sh@25 -- # sleep 1s 00:15:38.790 10:02:36 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:38.790 10:02:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.790 10:02:36 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:38.790 10:02:36 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:38.790 10:02:37 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:39.048 10:02:37 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:39.048 10:02:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:39.048 10:02:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.048 10:02:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:39.048 10:02:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:39.048 10:02:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:39.048 10:02:37 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:39.048 10:02:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:39.048 10:02:37 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.048 10:02:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:39.048 10:02:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.048 10:02:37 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:39.048 10:02:37 -- target/multipath.sh@25 -- # sleep 1s 00:15:39.983 10:02:38 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:39.983 10:02:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.983 10:02:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:39.983 10:02:38 -- target/multipath.sh@104 -- # wait 85682 00:15:42.515 00:15:42.515 job0: (groupid=0, jobs=1): err= 0: pid=85703: Mon Dec 16 10:02:40 2024 00:15:42.515 read: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(291MiB/6003msec) 00:15:42.515 slat (usec): min=2, max=4714, avg=45.71, stdev=202.48 00:15:42.515 clat (usec): min=418, max=14618, avg=7025.65, stdev=1106.23 00:15:42.515 lat (usec): min=442, max=14826, avg=7071.36, stdev=1114.32 00:15:42.515 clat percentiles (usec): 00:15:42.515 | 1.00th=[ 4228], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6194], 00:15:42.515 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7242], 00:15:42.515 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 8291], 95.00th=[ 8848], 00:15:42.515 | 99.00th=[10421], 99.50th=[10814], 99.90th=[12649], 99.95th=[13566], 00:15:42.515 | 99.99th=[14484] 00:15:42.515 bw ( KiB/s): min=12952, max=33608, per=53.85%, avg=26706.18, stdev=6827.61, samples=11 00:15:42.515 iops : min= 3238, max= 8402, avg=6676.55, stdev=1706.90, samples=11 00:15:42.515 write: IOPS=7449, BW=29.1MiB/s (30.5MB/s)(152MiB/5225msec); 0 zone resets 00:15:42.515 slat (usec): min=9, max=2036, avg=57.28, stdev=139.32 00:15:42.515 clat (usec): min=700, max=16535, avg=6128.00, stdev=911.86 00:15:42.515 lat (usec): min=1058, max=16560, avg=6185.28, stdev=914.78 00:15:42.515 clat percentiles (usec): 00:15:42.515 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5211], 20.00th=[ 5604], 00:15:42.515 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6325], 00:15:42.515 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7308], 00:15:42.515 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10814], 99.95th=[11731], 00:15:42.515 | 99.99th=[14746] 00:15:42.515 bw ( KiB/s): min=13424, max=33072, per=89.61%, avg=26701.82, stdev=6327.16, samples=11 00:15:42.515 iops : min= 3356, max= 8268, avg=6675.45, stdev=1581.79, samples=11 00:15:42.515 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:42.515 lat (msec) : 2=0.06%, 4=1.45%, 10=97.26%, 20=1.22% 00:15:42.515 cpu : usr=6.18%, sys=23.92%, ctx=7158, majf=0, minf=114 00:15:42.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:42.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.515 issued rwts: total=74422,38924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.515 00:15:42.515 Run status group 0 (all jobs): 00:15:42.515 READ: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6003-6003msec 00:15:42.515 WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=152MiB (159MB), run=5225-5225msec 00:15:42.515 00:15:42.515 Disk stats (read/write): 00:15:42.515 nvme0n1: ios=72636/38924, merge=0/0, ticks=477637/222364, in_queue=700001, util=98.58% 00:15:42.515 10:02:40 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:42.515 10:02:41 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:42.774 10:02:41 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:42.774 10:02:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:42.774 10:02:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.774 10:02:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:42.774 10:02:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:42.774 10:02:41 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:42.774 10:02:41 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:42.774 10:02:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:42.774 10:02:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.774 10:02:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:42.774 10:02:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:42.774 10:02:41 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:42.774 10:02:41 -- target/multipath.sh@25 -- # sleep 1s 00:15:43.708 10:02:42 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:43.708 10:02:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.708 10:02:42 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:43.708 10:02:42 -- target/multipath.sh@113 -- # echo round-robin 00:15:43.708 10:02:42 -- target/multipath.sh@116 -- # fio_pid=85838 00:15:43.709 10:02:42 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:43.709 10:02:42 -- target/multipath.sh@118 -- # sleep 1 00:15:43.709 [global] 00:15:43.709 thread=1 00:15:43.709 invalidate=1 00:15:43.709 rw=randrw 00:15:43.709 time_based=1 00:15:43.709 runtime=6 00:15:43.709 ioengine=libaio 00:15:43.709 direct=1 00:15:43.709 bs=4096 00:15:43.709 iodepth=128 00:15:43.709 norandommap=0 00:15:43.709 numjobs=1 00:15:43.709 00:15:43.709 verify_dump=1 00:15:43.709 verify_backlog=512 00:15:43.709 verify_state_save=0 00:15:43.709 do_verify=1 00:15:43.709 verify=crc32c-intel 00:15:43.709 [job0] 00:15:43.709 filename=/dev/nvme0n1 00:15:43.967 Could not set queue depth (nvme0n1) 00:15:43.967 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:43.967 fio-3.35 00:15:43.967 Starting 1 thread 00:15:44.903 10:02:43 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:45.162 10:02:43 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:45.420 10:02:43 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:45.420 10:02:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:45.420 10:02:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.420 10:02:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:45.420 10:02:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:45.420 10:02:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:45.420 10:02:43 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:45.420 10:02:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:45.420 10:02:43 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.420 10:02:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:45.420 10:02:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:45.420 10:02:43 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:45.420 10:02:43 -- target/multipath.sh@25 -- # sleep 1s 00:15:46.356 10:02:44 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:46.356 10:02:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.357 10:02:44 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.357 10:02:44 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:46.618 10:02:45 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:46.878 10:02:45 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:46.878 10:02:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:46.878 10:02:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.878 10:02:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:46.878 10:02:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:46.878 10:02:45 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.878 10:02:45 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:46.878 10:02:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:46.878 10:02:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.878 10:02:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:46.878 10:02:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.878 10:02:45 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:46.878 10:02:45 -- target/multipath.sh@25 -- # sleep 1s 00:15:47.813 10:02:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:47.813 10:02:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:47.813 10:02:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:47.813 10:02:46 -- target/multipath.sh@132 -- # wait 85838 00:15:50.347 00:15:50.347 job0: (groupid=0, jobs=1): err= 0: pid=85859: Mon Dec 16 10:02:48 2024 00:15:50.347 read: IOPS=13.5k, BW=52.8MiB/s (55.4MB/s)(317MiB/6005msec) 00:15:50.347 slat (usec): min=3, max=7069, avg=38.36, stdev=181.24 00:15:50.347 clat (usec): min=278, max=13616, avg=6608.53, stdev=1361.95 00:15:50.347 lat (usec): min=290, max=13639, avg=6646.89, stdev=1375.79 00:15:50.347 clat percentiles (usec): 00:15:50.347 | 1.00th=[ 3326], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:15:50.347 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6915], 00:15:50.347 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8094], 95.00th=[ 8586], 00:15:50.347 | 99.00th=[10421], 99.50th=[10945], 99.90th=[11600], 99.95th=[11863], 00:15:50.347 | 99.99th=[12518] 00:15:50.347 bw ( KiB/s): min=15880, max=45400, per=51.99%, avg=28108.18, stdev=9866.82, samples=11 00:15:50.347 iops : min= 3970, max=11350, avg=7027.00, stdev=2466.62, samples=11 00:15:50.347 write: IOPS=8119, BW=31.7MiB/s (33.3MB/s)(160MiB/5060msec); 0 zone resets 00:15:50.347 slat (usec): min=15, max=3337, avg=49.60, stdev=113.68 00:15:50.347 clat (usec): min=245, max=12235, avg=5483.91, stdev=1419.60 00:15:50.347 lat (usec): min=286, max=12260, avg=5533.52, stdev=1430.65 00:15:50.347 clat percentiles (usec): 00:15:50.347 | 1.00th=[ 2573], 5.00th=[ 3064], 10.00th=[ 3425], 20.00th=[ 3982], 00:15:50.347 | 30.00th=[ 4621], 40.00th=[ 5473], 50.00th=[ 5866], 60.00th=[ 6128], 00:15:50.347 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6980], 95.00th=[ 7308], 00:15:50.347 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[11076], 00:15:50.347 | 99.99th=[12125] 00:15:50.347 bw ( KiB/s): min=16608, max=45149, per=86.52%, avg=28102.27, stdev=9563.65, samples=11 00:15:50.347 iops : min= 4152, max=11287, avg=7025.55, stdev=2390.87, samples=11 00:15:50.347 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:15:50.347 lat (msec) : 2=0.09%, 4=8.98%, 10=89.80%, 20=1.10% 00:15:50.347 cpu : usr=6.41%, sys=27.68%, ctx=8146, majf=0, minf=127 00:15:50.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:50.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:50.347 issued rwts: total=81167,41086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:50.347 00:15:50.347 Run status group 0 (all jobs): 00:15:50.347 READ: bw=52.8MiB/s (55.4MB/s), 52.8MiB/s-52.8MiB/s (55.4MB/s-55.4MB/s), io=317MiB (332MB), run=6005-6005msec 00:15:50.347 WRITE: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=160MiB (168MB), run=5060-5060msec 00:15:50.347 00:15:50.347 Disk stats (read/write): 00:15:50.347 nvme0n1: ios=80193/40296, merge=0/0, ticks=488784/200285, in_queue=689069, util=98.60% 00:15:50.347 10:02:48 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:50.347 10:02:48 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.347 10:02:48 -- common/autotest_common.sh@1208 -- # local i=0 00:15:50.347 10:02:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:50.347 
10:02:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.347 10:02:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:50.347 10:02:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.347 10:02:48 -- common/autotest_common.sh@1220 -- # return 0 00:15:50.347 10:02:48 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.606 10:02:49 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:50.606 10:02:49 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:50.606 10:02:49 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:50.606 10:02:49 -- target/multipath.sh@144 -- # nvmftestfini 00:15:50.606 10:02:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.606 10:02:49 -- nvmf/common.sh@116 -- # sync 00:15:50.606 10:02:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.606 10:02:49 -- nvmf/common.sh@119 -- # set +e 00:15:50.606 10:02:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.606 10:02:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.606 rmmod nvme_tcp 00:15:50.606 rmmod nvme_fabrics 00:15:50.606 rmmod nvme_keyring 00:15:50.606 10:02:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.606 10:02:49 -- nvmf/common.sh@123 -- # set -e 00:15:50.606 10:02:49 -- nvmf/common.sh@124 -- # return 0 00:15:50.606 10:02:49 -- nvmf/common.sh@477 -- # '[' -n 85543 ']' 00:15:50.606 10:02:49 -- nvmf/common.sh@478 -- # killprocess 85543 00:15:50.606 10:02:49 -- common/autotest_common.sh@936 -- # '[' -z 85543 ']' 00:15:50.606 10:02:49 -- common/autotest_common.sh@940 -- # kill -0 85543 00:15:50.606 10:02:49 -- common/autotest_common.sh@941 -- # uname 00:15:50.606 10:02:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.606 10:02:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85543 00:15:50.606 killing process with pid 85543 00:15:50.606 10:02:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.606 10:02:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.606 10:02:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85543' 00:15:50.606 10:02:49 -- common/autotest_common.sh@955 -- # kill 85543 00:15:50.606 10:02:49 -- common/autotest_common.sh@960 -- # wait 85543 00:15:50.865 10:02:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.865 10:02:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:50.865 10:02:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.865 10:02:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.865 10:02:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.865 10:02:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.865 10:02:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.865 10:02:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.865 10:02:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:50.865 00:15:50.865 real 0m20.547s 00:15:50.865 user 1m19.849s 00:15:50.865 sys 0m7.118s 00:15:50.865 10:02:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:50.865 ************************************ 00:15:50.865 END TEST nvmf_multipath 00:15:50.865 ************************************ 00:15:50.865 10:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:51.125 10:02:49 -- nvmf/nvmf.sh@52 -- # 
run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.125 10:02:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.125 10:02:49 -- common/autotest_common.sh@10 -- # set +x 00:15:51.125 ************************************ 00:15:51.125 START TEST nvmf_zcopy 00:15:51.125 ************************************ 00:15:51.125 10:02:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:51.125 * Looking for test storage... 00:15:51.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.125 10:02:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.125 10:02:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.125 10:02:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.125 10:02:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.125 10:02:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.125 10:02:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.125 10:02:49 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.125 10:02:49 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.125 10:02:49 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.125 10:02:49 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.125 10:02:49 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.125 10:02:49 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.125 10:02:49 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.125 10:02:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.125 10:02:49 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.125 10:02:49 -- scripts/common.sh@344 -- # : 1 00:15:51.125 10:02:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.125 10:02:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.125 10:02:49 -- scripts/common.sh@364 -- # decimal 1 00:15:51.125 10:02:49 -- scripts/common.sh@352 -- # local d=1 00:15:51.125 10:02:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.125 10:02:49 -- scripts/common.sh@354 -- # echo 1 00:15:51.125 10:02:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.125 10:02:49 -- scripts/common.sh@365 -- # decimal 2 00:15:51.125 10:02:49 -- scripts/common.sh@352 -- # local d=2 00:15:51.125 10:02:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.125 10:02:49 -- scripts/common.sh@354 -- # echo 2 00:15:51.125 10:02:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.125 10:02:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.125 10:02:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.125 10:02:49 -- scripts/common.sh@367 -- # return 0 00:15:51.125 10:02:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.125 --rc genhtml_branch_coverage=1 00:15:51.125 --rc genhtml_function_coverage=1 00:15:51.125 --rc genhtml_legend=1 00:15:51.125 --rc geninfo_all_blocks=1 00:15:51.125 --rc geninfo_unexecuted_blocks=1 00:15:51.125 00:15:51.125 ' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.125 --rc genhtml_branch_coverage=1 00:15:51.125 --rc genhtml_function_coverage=1 00:15:51.125 --rc genhtml_legend=1 00:15:51.125 --rc geninfo_all_blocks=1 00:15:51.125 --rc geninfo_unexecuted_blocks=1 00:15:51.125 00:15:51.125 ' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.125 --rc genhtml_branch_coverage=1 00:15:51.125 --rc genhtml_function_coverage=1 00:15:51.125 --rc genhtml_legend=1 00:15:51.125 --rc geninfo_all_blocks=1 00:15:51.125 --rc geninfo_unexecuted_blocks=1 00:15:51.125 00:15:51.125 ' 00:15:51.125 10:02:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.125 --rc genhtml_branch_coverage=1 00:15:51.125 --rc genhtml_function_coverage=1 00:15:51.125 --rc genhtml_legend=1 00:15:51.125 --rc geninfo_all_blocks=1 00:15:51.125 --rc geninfo_unexecuted_blocks=1 00:15:51.125 00:15:51.125 ' 00:15:51.125 10:02:49 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.125 10:02:49 -- nvmf/common.sh@7 -- # uname -s 00:15:51.125 10:02:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.125 10:02:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.125 10:02:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.125 10:02:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.125 10:02:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.125 10:02:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.125 10:02:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.125 10:02:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.125 10:02:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.125 10:02:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:51.125 
10:02:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:15:51.125 10:02:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.125 10:02:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.125 10:02:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.125 10:02:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.125 10:02:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.125 10:02:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.125 10:02:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.125 10:02:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.125 10:02:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.125 10:02:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.125 10:02:49 -- paths/export.sh@5 -- # export PATH 00:15:51.125 10:02:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.125 10:02:49 -- nvmf/common.sh@46 -- # : 0 00:15:51.125 10:02:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.125 10:02:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.125 10:02:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.125 10:02:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.125 10:02:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.125 10:02:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:51.125 10:02:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.125 10:02:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.125 10:02:49 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:51.125 10:02:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.125 10:02:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.125 10:02:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.125 10:02:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.125 10:02:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.125 10:02:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.125 10:02:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.125 10:02:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.125 10:02:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:51.125 10:02:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:51.125 10:02:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.125 10:02:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.125 10:02:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.125 10:02:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:51.126 10:02:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.126 10:02:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.126 10:02:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.126 10:02:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.126 10:02:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.126 10:02:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.126 10:02:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.126 10:02:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.126 10:02:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:51.126 10:02:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:51.385 Cannot find device "nvmf_tgt_br" 00:15:51.385 10:02:49 -- nvmf/common.sh@154 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.385 Cannot find device "nvmf_tgt_br2" 00:15:51.385 10:02:49 -- nvmf/common.sh@155 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:51.385 10:02:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:51.385 Cannot find device "nvmf_tgt_br" 00:15:51.385 10:02:49 -- nvmf/common.sh@157 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:51.385 Cannot find device "nvmf_tgt_br2" 00:15:51.385 10:02:49 -- nvmf/common.sh@158 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:51.385 10:02:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:51.385 10:02:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.385 10:02:49 -- nvmf/common.sh@161 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.385 10:02:49 -- nvmf/common.sh@162 -- # true 00:15:51.385 10:02:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.385 10:02:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.385 10:02:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.385 10:02:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.385 10:02:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.385 10:02:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.385 10:02:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.385 10:02:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.385 10:02:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.385 10:02:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.385 10:02:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.385 10:02:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.385 10:02:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.385 10:02:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.385 10:02:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.385 10:02:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.385 10:02:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.385 10:02:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.385 10:02:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.385 10:02:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.385 10:02:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.643 10:02:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.643 10:02:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.643 10:02:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:51.643 00:15:51.643 --- 10.0.0.2 ping statistics --- 00:15:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.643 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:51.643 10:02:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:51.643 00:15:51.643 --- 10.0.0.3 ping statistics --- 00:15:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.643 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:51.643 10:02:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:51.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:51.643 00:15:51.643 --- 10.0.0.1 ping statistics --- 00:15:51.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.643 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:51.643 10:02:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.643 10:02:50 -- nvmf/common.sh@421 -- # return 0 00:15:51.643 10:02:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.643 10:02:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.643 10:02:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.643 10:02:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.643 10:02:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.643 10:02:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.643 10:02:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.643 10:02:50 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:51.643 10:02:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.643 10:02:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.643 10:02:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.643 10:02:50 -- nvmf/common.sh@469 -- # nvmfpid=86145 00:15:51.643 10:02:50 -- nvmf/common.sh@470 -- # waitforlisten 86145 00:15:51.643 10:02:50 -- common/autotest_common.sh@829 -- # '[' -z 86145 ']' 00:15:51.643 10:02:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:51.643 10:02:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.643 10:02:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.643 10:02:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.643 10:02:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.643 10:02:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.643 [2024-12-16 10:02:50.120557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.643 [2024-12-16 10:02:50.120665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.643 [2024-12-16 10:02:50.261528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.902 [2024-12-16 10:02:50.318148] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.902 [2024-12-16 10:02:50.318288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.902 [2024-12-16 10:02:50.318298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.902 [2024-12-16 10:02:50.318306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:51.902 [2024-12-16 10:02:50.318335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.838 10:02:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.838 10:02:51 -- common/autotest_common.sh@862 -- # return 0 00:15:52.838 10:02:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.838 10:02:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 10:02:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.838 10:02:51 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:52.838 10:02:51 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 [2024-12-16 10:02:51.183086] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 [2024-12-16 10:02:51.199204] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 malloc0 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:52.838 10:02:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.838 10:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.838 10:02:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.838 10:02:51 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:52.838 10:02:51 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:52.838 10:02:51 -- nvmf/common.sh@520 -- # config=() 00:15:52.838 10:02:51 -- nvmf/common.sh@520 -- # local subsystem config 00:15:52.838 10:02:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:52.838 10:02:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:52.838 { 00:15:52.838 "params": { 00:15:52.838 "name": "Nvme$subsystem", 00:15:52.838 "trtype": "$TEST_TRANSPORT", 
00:15:52.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.838 "adrfam": "ipv4", 00:15:52.838 "trsvcid": "$NVMF_PORT", 00:15:52.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.838 "hdgst": ${hdgst:-false}, 00:15:52.838 "ddgst": ${ddgst:-false} 00:15:52.838 }, 00:15:52.838 "method": "bdev_nvme_attach_controller" 00:15:52.838 } 00:15:52.838 EOF 00:15:52.838 )") 00:15:52.838 10:02:51 -- nvmf/common.sh@542 -- # cat 00:15:52.838 10:02:51 -- nvmf/common.sh@544 -- # jq . 00:15:52.838 10:02:51 -- nvmf/common.sh@545 -- # IFS=, 00:15:52.838 10:02:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:52.838 "params": { 00:15:52.838 "name": "Nvme1", 00:15:52.838 "trtype": "tcp", 00:15:52.838 "traddr": "10.0.0.2", 00:15:52.838 "adrfam": "ipv4", 00:15:52.838 "trsvcid": "4420", 00:15:52.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.838 "hdgst": false, 00:15:52.838 "ddgst": false 00:15:52.838 }, 00:15:52.838 "method": "bdev_nvme_attach_controller" 00:15:52.838 }' 00:15:52.838 [2024-12-16 10:02:51.289998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:52.838 [2024-12-16 10:02:51.290086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86202 ] 00:15:52.838 [2024-12-16 10:02:51.429104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.097 [2024-12-16 10:02:51.482896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.097 Running I/O for 10 seconds... 00:16:03.073 00:16:03.073 Latency(us) 00:16:03.073 [2024-12-16T10:03:01.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.073 [2024-12-16T10:03:01.698Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:03.073 Verification LBA range: start 0x0 length 0x1000 00:16:03.073 Nvme1n1 : 10.01 11071.07 86.49 0.00 0.00 11533.24 897.40 19660.80 00:16:03.073 [2024-12-16T10:03:01.698Z] =================================================================================================================== 00:16:03.073 [2024-12-16T10:03:01.698Z] Total : 11071.07 86.49 0.00 0.00 11533.24 897.40 19660.80 00:16:03.345 10:03:01 -- target/zcopy.sh@39 -- # perfpid=86314 00:16:03.345 10:03:01 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:03.345 10:03:01 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:03.345 10:03:01 -- nvmf/common.sh@520 -- # config=() 00:16:03.345 10:03:01 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:03.345 10:03:01 -- nvmf/common.sh@520 -- # local subsystem config 00:16:03.345 10:03:01 -- common/autotest_common.sh@10 -- # set +x 00:16:03.345 10:03:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:03.345 10:03:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:03.345 { 00:16:03.345 "params": { 00:16:03.345 "name": "Nvme$subsystem", 00:16:03.345 "trtype": "$TEST_TRANSPORT", 00:16:03.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:03.345 "adrfam": "ipv4", 00:16:03.345 "trsvcid": "$NVMF_PORT", 00:16:03.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:03.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:03.345 "hdgst": ${hdgst:-false}, 00:16:03.345 "ddgst": ${ddgst:-false} 
00:16:03.345 }, 00:16:03.345 "method": "bdev_nvme_attach_controller" 00:16:03.345 } 00:16:03.345 EOF 00:16:03.345 )") 00:16:03.345 10:03:01 -- nvmf/common.sh@542 -- # cat 00:16:03.345 [2024-12-16 10:03:01.856621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.856680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.345 10:03:01 -- nvmf/common.sh@544 -- # jq . 00:16:03.345 10:03:01 -- nvmf/common.sh@545 -- # IFS=, 00:16:03.345 10:03:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:03.345 "params": { 00:16:03.345 "name": "Nvme1", 00:16:03.345 "trtype": "tcp", 00:16:03.345 "traddr": "10.0.0.2", 00:16:03.345 "adrfam": "ipv4", 00:16:03.345 "trsvcid": "4420", 00:16:03.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.345 "hdgst": false, 00:16:03.345 "ddgst": false 00:16:03.345 }, 00:16:03.345 "method": "bdev_nvme_attach_controller" 00:16:03.345 }' 00:16:03.345 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.345 [2024-12-16 10:03:01.868559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.868590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.345 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.345 [2024-12-16 10:03:01.880547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.880570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.345 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.345 [2024-12-16 10:03:01.892549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.892570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.345 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.345 [2024-12-16 10:03:01.904579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.904604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.345 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.345 [2024-12-16 10:03:01.910315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:03.345 [2024-12-16 10:03:01.910456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86314 ] 00:16:03.345 [2024-12-16 10:03:01.916561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.345 [2024-12-16 10:03:01.916583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.346 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.346 [2024-12-16 10:03:01.928571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.346 [2024-12-16 10:03:01.928591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.346 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.346 [2024-12-16 10:03:01.940557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.346 [2024-12-16 10:03:01.940592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.346 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.346 [2024-12-16 10:03:01.952568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.346 [2024-12-16 10:03:01.952588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.346 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:01.964571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:01.964592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:01.976572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:01.976592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:01.988575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:01.988595] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.000578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.000598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.012579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.012599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.024584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.024604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.036592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.036615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.048588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.048609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 [2024-12-16 10:03:02.050445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.060605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.060628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.072639] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.072659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.084632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.084654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.096656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.096680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.108649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.108670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 [2024-12-16 10:03:02.112251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.120660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.120685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.132690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.132713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.144679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.144705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.156700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.156729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.168684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.168706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.620 [2024-12-16 10:03:02.180689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.620 [2024-12-16 10:03:02.180712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.620 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.621 [2024-12-16 10:03:02.192706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.621 [2024-12-16 10:03:02.192733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.621 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.621 [2024-12-16 10:03:02.204688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.621 [2024-12-16 10:03:02.204711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.621 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.621 [2024-12-16 10:03:02.216691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.621 [2024-12-16 10:03:02.216716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.621 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.621 [2024-12-16 10:03:02.228724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.621 [2024-12-16 10:03:02.228753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.621 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.621 [2024-12-16 10:03:02.236698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.621 [2024-12-16 10:03:02.236728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.621 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.248754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.248798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.260724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.260769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.272725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.272765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.284744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.284771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 Running I/O for 5 seconds... 
00:16:03.880 [2024-12-16 10:03:02.296740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.296763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.313915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.313949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.329668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.329700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.348470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.348517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.363141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.363189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.378946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.378990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.396694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.396725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.411821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.411869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.428128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.428160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.443993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.444039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.461791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.461877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.476807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.476852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:03.880 [2024-12-16 10:03:02.495277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:03.880 [2024-12-16 10:03:02.495322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:03.880 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.509718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.509763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.525177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.525237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.535383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.535441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.549259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.549306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.565313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.565357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.581916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.581950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.598568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.598615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.614137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.614217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.630736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.630782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.648093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.648138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.664410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.664456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.139 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.139 [2024-12-16 10:03:02.681469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.139 [2024-12-16 10:03:02.681517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.140 [2024-12-16 10:03:02.697283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.140 [2024-12-16 10:03:02.697330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.140 [2024-12-16 10:03:02.715466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.140 [2024-12-16 10:03:02.715513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.140 [2024-12-16 10:03:02.730953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.140 [2024-12-16 10:03:02.730999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.140 [2024-12-16 10:03:02.742664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.140 [2024-12-16 10:03:02.742710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.140 [2024-12-16 10:03:02.758579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.140 [2024-12-16 10:03:02.758626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.140 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.775497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.775545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.792067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.792115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.808379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.808425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.825236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.825284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.842273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.842320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.856860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.856907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.872653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.872700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.888908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.888954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.906362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.906418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.921081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.921126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.936395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.936441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.954653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.954700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.969704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.969750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.981572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.981617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:02.996672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:02.996719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.399 [2024-12-16 10:03:03.014238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.399 [2024-12-16 10:03:03.014285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.399 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.028488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.028534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.044094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.044142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.061075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.061121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.077297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.077342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.094450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.094497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.111201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.111248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.127539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.127587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.144816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.144863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.162626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.162673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.178890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.178938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.194629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.194677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.205707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.205752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.221878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.221926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.238799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.238847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.255298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.255344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.659 [2024-12-16 10:03:03.272090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.659 [2024-12-16 10:03:03.272136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.659 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.918 [2024-12-16 10:03:03.288724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.918 [2024-12-16 10:03:03.288771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.918 2024/12/16 
10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.918 [2024-12-16 10:03:03.305269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.918 [2024-12-16 10:03:03.305302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.918 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.918 [2024-12-16 10:03:03.322260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.918 [2024-12-16 10:03:03.322293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.918 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.338930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.338962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.354257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.354294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.365353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.365423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.382246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.382294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.397290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.397346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.407076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.407129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.423243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.423295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.440345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.440425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.450401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.450449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.464793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.464847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.480131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.480184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.496242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.496296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.513765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.513844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:04.919 [2024-12-16 10:03:03.529336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.919 [2024-12-16 10:03:03.529420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.919 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.546647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.546700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.563292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.563348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.574641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.574692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.591014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.591067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.607115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.607170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.623677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.623731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.640105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.640158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.657451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.657504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.672409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.672463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.687706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.687760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.178 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.178 [2024-12-16 10:03:03.704792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.178 [2024-12-16 10:03:03.704847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.720268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.720322] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.735026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.735079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.749695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.749748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.764818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.764871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.782669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.782722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.179 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.179 [2024-12-16 10:03:03.798994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.179 [2024-12-16 10:03:03.799048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.810054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.810109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.826746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 
10:03:03.826799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.842759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.842812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.859483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.859532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.877229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.877282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.892836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.892886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.910036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.910075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.925801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.925863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.936734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:05.438 [2024-12-16 10:03:03.936786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.952274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.952309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.968916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.968968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:03.985566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:03.985618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:04.002047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:04.002101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:04.018988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:04.019043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:04.035131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.438 [2024-12-16 10:03:04.035184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.438 [2024-12-16 10:03:04.051787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:05.438 [2024-12-16 10:03:04.051842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.438 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.068364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.068408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.084828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.084881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.102111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.102188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.117216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.117268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.133786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.133874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.150911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.150964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.166927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.166980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.183638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.183676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.200897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.200949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.217370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.217422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.697 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.697 [2024-12-16 10:03:04.234225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.697 [2024-12-16 10:03:04.234262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.698 [2024-12-16 10:03:04.251205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.698 [2024-12-16 10:03:04.251258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.698 [2024-12-16 10:03:04.266917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.698 [2024-12-16 10:03:04.266953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.698 [2024-12-16 10:03:04.283957] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.698 [2024-12-16 10:03:04.284013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.698 [2024-12-16 10:03:04.300738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.698 [2024-12-16 10:03:04.300793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.698 [2024-12-16 10:03:04.317344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.698 [2024-12-16 10:03:04.317407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.698 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.334376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.334423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.350323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.350388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.367478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.367531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.383453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.383505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 
10:03:04.400867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.400924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.416824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.416876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.433929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.433972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.450033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.450090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.467810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.467861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.482584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.482638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.500025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.500078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
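Each of the failing calls logged above corresponds to a JSON-RPC request of roughly the following shape, reconstructed here from the method name and parameters echoed in the log; the "jsonrpc" and "id" envelope fields are the standard JSON-RPC 2.0 framing and are assumptions, not something printed in the log itself. The target rejects the call with code -32602 (Invalid parameters) because namespace ID 1 is already allocated in subsystem nqn.2016-06.io.spdk:cnode1:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {
          "bdev_name": "malloc0",
          "nsid": 1
        }
      }
    }

The surrounding loop keeps reissuing this request with nsid fixed at 1, so every attempt hits the same "Requested NSID 1 already in use" check; only an NSID that is not yet in use (or, typically, omitting nsid so the target assigns a free one) would let the call succeed.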
00:16:05.957 [2024-12-16 10:03:04.515044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.515081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.531198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.531252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.546991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.547044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.558454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.558506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.957 [2024-12-16 10:03:04.574794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.957 [2024-12-16 10:03:04.574831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.957 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.589706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.589758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.605728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.605782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.622068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.622102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.636867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.636917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.652949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.653001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.670341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.670403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.686735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.686788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.703026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.703079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.720413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.720449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.736817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.736869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.753504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.753555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.770039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.770092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.786640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.786696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.803227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.803263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.819916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.819968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.217 [2024-12-16 10:03:04.835332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.217 [2024-12-16 10:03:04.835396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.217 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.846182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.846243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.862291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.862344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.878182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.878234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.895099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.895152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.911512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.911569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.928874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.928926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.945320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.945396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.961957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.962012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.476 [2024-12-16 10:03:04.977644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.476 [2024-12-16 10:03:04.977678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.476 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:04.988647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:04.988696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.004149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.004202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.020555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.020607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.037465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.037520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.054432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.054467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.070270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.070322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.477 [2024-12-16 10:03:05.086934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.477 [2024-12-16 10:03:05.086986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.477 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.102806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.102858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.117446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.117497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.132348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.132397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.150161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.150215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.166159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.166229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.183066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.183120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.199562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.199614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.216073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.216125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.233056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.233106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.735 [2024-12-16 10:03:05.249190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.735 [2024-12-16 10:03:05.249242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.735 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.266523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.266560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.282596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.282649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.299224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.299260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.315325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.315390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.331844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.331897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.736 [2024-12-16 10:03:05.348801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.736 [2024-12-16 10:03:05.348854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.736 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.994 [2024-12-16 10:03:05.364795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.994 [2024-12-16 10:03:05.364847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.994 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.994 [2024-12-16 10:03:05.381407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.994 [2024-12-16 10:03:05.381457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.994 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.994 [2024-12-16 10:03:05.398499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.994 [2024-12-16 10:03:05.398550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.994 2024/12/16 10:03:05 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.994 [2024-12-16 10:03:05.414102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.994 [2024-12-16 10:03:05.414171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.426627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.426679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.441777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.441876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.459108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.459144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.475048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.475102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.492549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.492603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.507387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.507438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 
10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.517187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.517236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.532528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.532581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.549971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.550024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.566464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.566499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.582463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.582515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.594423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.594473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.995 [2024-12-16 10:03:05.610466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.995 [2024-12-16 10:03:05.610502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:06.995 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.626620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.626672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.643243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.643292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.660245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.660298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.676544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.676598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.693628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.693662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.709656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.709708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.727217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.727253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.743524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.743575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.759125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.759158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.770862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.770915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.787384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.787437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.803861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.803913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.821024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.821074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.836935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.836987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.853400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.853451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.254 [2024-12-16 10:03:05.870320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.254 [2024-12-16 10:03:05.870396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.254 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.513 [2024-12-16 10:03:05.886903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.513 [2024-12-16 10:03:05.886955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.513 [2024-12-16 10:03:05.903160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.513 [2024-12-16 10:03:05.903213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.513 [2024-12-16 10:03:05.919650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.513 [2024-12-16 10:03:05.919686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.513 [2024-12-16 10:03:05.936137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.513 [2024-12-16 10:03:05.936189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.513 [2024-12-16 10:03:05.953013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.513 [2024-12-16 10:03:05.953062] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.513 2024/12/16 10:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:07.513 [2024-12-16 10:03:05.969738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:07.513 [2024-12-16 10:03:05.969787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[The same three messages -- subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use", nvmf_rpc.c:1513:nvmf_rpc_ns_paused "Unable to add namespace", and the JSON-RPC reply for nvmf_subsystem_add_ns with Code=-32602 Msg=Invalid parameters -- repeat roughly every 10-20 ms from 10:03:05.969 through 10:03:07.295 while the I/O job runs; the duplicate log lines are omitted here.]
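This flood of identical failures is expected for this test: while I/O is in flight, a background task (apparently the process waited on as pid 86314 further below) keeps calling nvmf_subsystem_add_ns for an NSID the subsystem already owns, so each attempt is rejected by spdk_nvmf_subsystem_add_ns_ext and surfaces to the JSON-RPC client as Code=-32602 (Invalid parameters). A minimal sketch of a loop that produces this log pattern, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (the iteration count and socket path are illustrative, not taken from this run):

  # Hypothetical sketch, not the literal zcopy.sh loop: NSID 1 is already
  # backed by malloc0, so every attempt fails with Code=-32602.
  for _ in $(seq 1 50); do
      scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
          nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done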
00:16:08.809 Latency(us)
00:16:08.809 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:16:08.809 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:08.809 Nvme1n1            : 5.01        13379.80  104.53  0.00    0.00  9556.38  4170.47  20375.74
00:16:08.810 Total              :             13379.80  104.53  0.00    0.00  9556.38  4170.47  20375.74
[After this summary the same nvmf_subsystem_add_ns failures ("Requested NSID 1 already in use", Code=-32602 Msg=Invalid parameters) continue from 10:03:07.306 through 10:03:07.498, at which point the background add_ns activity stops; those duplicate lines are omitted.]
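The summary row can be cross-checked with plain shell arithmetic: at an I/O size of 8192 bytes, the IOPS and MiB/s columns should agree, and they do.

  # 13379.80 IOPS x 8192 B per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 13379.80 * 8192 / (1024 * 1024) }'   # prints 104.53, matching the MiB/s column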
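Once the add_ns spam stops, the test (target/zcopy.sh, shown just below) removes namespace 1, wraps malloc0 in a delay bdev named delay0, and re-adds delay0 as namespace 1 -- presumably so the abort example run that follows has slow enough I/O to be worth aborting. A hedged sketch of the same sequence as direct rpc.py calls (rpc_cmd in the log is the test framework's wrapper around this; socket path left at its default):

  # Sketch of the namespace swap performed below via rpc_cmd.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay0 layers a large artificial latency on top of malloc0
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1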
00:16:09.068 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86314) - No such process
00:16:09.068 10:03:07 -- target/zcopy.sh@49 -- # wait 86314
00:16:09.068 10:03:07 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:09.068 10:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.068 10:03:07 -- common/autotest_common.sh@10 -- # set +x
00:16:09.068 10:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.068 10:03:07 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:09.068 10:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.068 10:03:07 -- common/autotest_common.sh@10 -- # set +x
00:16:09.068 delay0
00:16:09.068 10:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.068 10:03:07 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:16:09.068 10:03:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.068 10:03:07 -- common/autotest_common.sh@10 -- # set +x
00:16:09.068 10:03:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.068 10:03:07 -- 
target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:09.068 [2024-12-16 10:03:07.682099] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:15.630 Initializing NVMe Controllers 00:16:15.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:15.630 Initialization complete. Launching workers. 00:16:15.630 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 60 00:16:15.630 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 347, failed to submit 33 00:16:15.630 success 157, unsuccess 190, failed 0 00:16:15.630 10:03:13 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:15.630 10:03:13 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:15.630 10:03:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.630 10:03:13 -- nvmf/common.sh@116 -- # sync 00:16:15.630 10:03:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.630 10:03:13 -- nvmf/common.sh@119 -- # set +e 00:16:15.630 10:03:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.630 10:03:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.630 rmmod nvme_tcp 00:16:15.630 rmmod nvme_fabrics 00:16:15.630 rmmod nvme_keyring 00:16:15.630 10:03:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.630 10:03:13 -- nvmf/common.sh@123 -- # set -e 00:16:15.630 10:03:13 -- nvmf/common.sh@124 -- # return 0 00:16:15.630 10:03:13 -- nvmf/common.sh@477 -- # '[' -n 86145 ']' 00:16:15.630 10:03:13 -- nvmf/common.sh@478 -- # killprocess 86145 00:16:15.630 10:03:13 -- common/autotest_common.sh@936 -- # '[' -z 86145 ']' 00:16:15.630 10:03:13 -- common/autotest_common.sh@940 -- # kill -0 86145 00:16:15.630 10:03:13 -- common/autotest_common.sh@941 -- # uname 00:16:15.630 10:03:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.630 10:03:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86145 00:16:15.630 killing process with pid 86145 00:16:15.630 10:03:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:15.630 10:03:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:15.630 10:03:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86145' 00:16:15.630 10:03:13 -- common/autotest_common.sh@955 -- # kill 86145 00:16:15.630 10:03:13 -- common/autotest_common.sh@960 -- # wait 86145 00:16:15.630 10:03:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.630 10:03:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.630 10:03:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.630 10:03:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.630 10:03:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.630 10:03:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.630 10:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.630 10:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.630 10:03:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.630 00:16:15.630 real 0m24.595s 00:16:15.630 user 0m39.916s 00:16:15.630 sys 0m6.406s 00:16:15.630 ************************************ 
00:16:15.630 END TEST nvmf_zcopy 00:16:15.630 ************************************ 00:16:15.630 10:03:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:15.630 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:15.630 10:03:14 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:15.630 10:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:15.630 10:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.630 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:15.630 ************************************ 00:16:15.630 START TEST nvmf_nmic 00:16:15.630 ************************************ 00:16:15.630 10:03:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:15.630 * Looking for test storage... 00:16:15.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.630 10:03:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:15.630 10:03:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:15.630 10:03:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:15.890 10:03:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:15.890 10:03:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:15.890 10:03:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:15.890 10:03:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:15.890 10:03:14 -- scripts/common.sh@335 -- # IFS=.-: 00:16:15.890 10:03:14 -- scripts/common.sh@335 -- # read -ra ver1 00:16:15.890 10:03:14 -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.890 10:03:14 -- scripts/common.sh@336 -- # read -ra ver2 00:16:15.890 10:03:14 -- scripts/common.sh@337 -- # local 'op=<' 00:16:15.890 10:03:14 -- scripts/common.sh@339 -- # ver1_l=2 00:16:15.890 10:03:14 -- scripts/common.sh@340 -- # ver2_l=1 00:16:15.890 10:03:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:15.890 10:03:14 -- scripts/common.sh@343 -- # case "$op" in 00:16:15.890 10:03:14 -- scripts/common.sh@344 -- # : 1 00:16:15.890 10:03:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:15.890 10:03:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.890 10:03:14 -- scripts/common.sh@364 -- # decimal 1 00:16:15.890 10:03:14 -- scripts/common.sh@352 -- # local d=1 00:16:15.890 10:03:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.890 10:03:14 -- scripts/common.sh@354 -- # echo 1 00:16:15.890 10:03:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:15.890 10:03:14 -- scripts/common.sh@365 -- # decimal 2 00:16:15.890 10:03:14 -- scripts/common.sh@352 -- # local d=2 00:16:15.890 10:03:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.890 10:03:14 -- scripts/common.sh@354 -- # echo 2 00:16:15.890 10:03:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:15.890 10:03:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:15.890 10:03:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:15.890 10:03:14 -- scripts/common.sh@367 -- # return 0 00:16:15.890 10:03:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.890 10:03:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.890 --rc genhtml_branch_coverage=1 00:16:15.890 --rc genhtml_function_coverage=1 00:16:15.890 --rc genhtml_legend=1 00:16:15.890 --rc geninfo_all_blocks=1 00:16:15.890 --rc geninfo_unexecuted_blocks=1 00:16:15.890 00:16:15.890 ' 00:16:15.890 10:03:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.890 --rc genhtml_branch_coverage=1 00:16:15.890 --rc genhtml_function_coverage=1 00:16:15.890 --rc genhtml_legend=1 00:16:15.890 --rc geninfo_all_blocks=1 00:16:15.890 --rc geninfo_unexecuted_blocks=1 00:16:15.890 00:16:15.890 ' 00:16:15.890 10:03:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.890 --rc genhtml_branch_coverage=1 00:16:15.890 --rc genhtml_function_coverage=1 00:16:15.890 --rc genhtml_legend=1 00:16:15.890 --rc geninfo_all_blocks=1 00:16:15.890 --rc geninfo_unexecuted_blocks=1 00:16:15.890 00:16:15.890 ' 00:16:15.890 10:03:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.890 --rc genhtml_branch_coverage=1 00:16:15.890 --rc genhtml_function_coverage=1 00:16:15.890 --rc genhtml_legend=1 00:16:15.890 --rc geninfo_all_blocks=1 00:16:15.890 --rc geninfo_unexecuted_blocks=1 00:16:15.890 00:16:15.890 ' 00:16:15.890 10:03:14 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.890 10:03:14 -- nvmf/common.sh@7 -- # uname -s 00:16:15.890 10:03:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.890 10:03:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.890 10:03:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.890 10:03:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.890 10:03:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.890 10:03:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.890 10:03:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.890 10:03:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.890 10:03:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.890 10:03:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:15.890 
10:03:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:15.890 10:03:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.890 10:03:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.890 10:03:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.890 10:03:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.890 10:03:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.890 10:03:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.890 10:03:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.890 10:03:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.890 10:03:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.890 10:03:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.890 10:03:14 -- paths/export.sh@5 -- # export PATH 00:16:15.890 10:03:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.890 10:03:14 -- nvmf/common.sh@46 -- # : 0 00:16:15.890 10:03:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:15.890 10:03:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:15.890 10:03:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:15.890 10:03:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.890 10:03:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.890 10:03:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:15.890 10:03:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:15.890 10:03:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:15.890 10:03:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.890 10:03:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.890 10:03:14 -- target/nmic.sh@14 -- # nvmftestinit 00:16:15.890 10:03:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:15.890 10:03:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.890 10:03:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:15.890 10:03:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:15.890 10:03:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:15.890 10:03:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.890 10:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.890 10:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.890 10:03:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:15.890 10:03:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:15.890 10:03:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.890 10:03:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.890 10:03:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.890 10:03:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:15.890 10:03:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.890 10:03:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.890 10:03:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.890 10:03:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.890 10:03:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.890 10:03:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.890 10:03:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.890 10:03:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.890 10:03:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:15.890 10:03:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:15.890 Cannot find device "nvmf_tgt_br" 00:16:15.890 10:03:14 -- nvmf/common.sh@154 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.890 Cannot find device "nvmf_tgt_br2" 00:16:15.890 10:03:14 -- nvmf/common.sh@155 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:15.890 10:03:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:15.890 Cannot find device "nvmf_tgt_br" 00:16:15.890 10:03:14 -- nvmf/common.sh@157 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:15.890 Cannot find device "nvmf_tgt_br2" 00:16:15.890 10:03:14 -- nvmf/common.sh@158 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:15.890 10:03:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:15.890 10:03:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.890 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:15.890 10:03:14 -- nvmf/common.sh@161 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.890 10:03:14 -- nvmf/common.sh@162 -- # true 00:16:15.890 10:03:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.890 10:03:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.890 10:03:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.890 10:03:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.890 10:03:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.149 10:03:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.149 10:03:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.149 10:03:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.149 10:03:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.149 10:03:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.149 10:03:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.149 10:03:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.149 10:03:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.149 10:03:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.149 10:03:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.149 10:03:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.149 10:03:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.149 10:03:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.149 10:03:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.149 10:03:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.149 10:03:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.149 10:03:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.149 10:03:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.149 10:03:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:16.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:16.149 00:16:16.149 --- 10.0.0.2 ping statistics --- 00:16:16.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.149 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:16.149 10:03:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:16.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:16.149 00:16:16.149 --- 10.0.0.3 ping statistics --- 00:16:16.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.149 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:16.149 10:03:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:16.149 00:16:16.149 --- 10.0.0.1 ping statistics --- 00:16:16.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.149 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:16.149 10:03:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.149 10:03:14 -- nvmf/common.sh@421 -- # return 0 00:16:16.149 10:03:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.149 10:03:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.149 10:03:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.149 10:03:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.149 10:03:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.149 10:03:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.149 10:03:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.149 10:03:14 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:16.149 10:03:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.149 10:03:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.149 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:16.149 10:03:14 -- nvmf/common.sh@469 -- # nvmfpid=86637 00:16:16.149 10:03:14 -- nvmf/common.sh@470 -- # waitforlisten 86637 00:16:16.149 10:03:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.149 10:03:14 -- common/autotest_common.sh@829 -- # '[' -z 86637 ']' 00:16:16.149 10:03:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.149 10:03:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.149 10:03:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.149 10:03:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.149 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:16:16.149 [2024-12-16 10:03:14.727688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:16.149 [2024-12-16 10:03:14.727781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.408 [2024-12-16 10:03:14.867307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.408 [2024-12-16 10:03:14.932799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.408 [2024-12-16 10:03:14.932927] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.408 [2024-12-16 10:03:14.932938] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.408 [2024-12-16 10:03:14.932945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
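The nvmf_veth_init sequence above builds the whole test topology: the initiator stays in the root namespace on 10.0.0.1 while the target owns 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and the two sides meet on the nvmf_br bridge. (The earlier "Cannot find device" / "Cannot open network namespace" lines are only the pre-cleanup of any leftover topology failing harmlessly; each is followed by true.) Condensed, the same setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # root namespace reaches both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace reaches the initiator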
00:16:16.408 [2024-12-16 10:03:14.933093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.408 [2024-12-16 10:03:14.933349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.408 [2024-12-16 10:03:14.933936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.408 [2024-12-16 10:03:14.933954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.343 10:03:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.343 10:03:15 -- common/autotest_common.sh@862 -- # return 0 00:16:17.343 10:03:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.343 10:03:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 10:03:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.343 10:03:15 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 [2024-12-16 10:03:15.681568] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 Malloc0 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 [2024-12-16 10:03:15.751144] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.343 test case1: single bdev can't be used in multiple subsystems 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:17.343 10:03:15 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 
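With the four reactors up, nvmfappstart has done its job: the SPDK target runs as a userspace process inside the network namespace, and waitforlisten polls /var/tmp/spdk.sock until it answers JSON-RPC before the test proceeds (its return is the next thing traced below). Reduced to its moving parts, with the polling loop as an assumed equivalent of waitforlisten rather than the verbatim helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5      # "Waiting for process to start up and listen on UNIX domain socket ..."
  done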
00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@28 -- # nmic_status=0 00:16:17.343 10:03:15 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 [2024-12-16 10:03:15.774962] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:17.343 [2024-12-16 10:03:15.775166] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:17.343 [2024-12-16 10:03:15.775182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:17.343 2024/12/16 10:03:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:17.343 request: 00:16:17.343 { 00:16:17.343 "method": "nvmf_subsystem_add_ns", 00:16:17.343 "params": { 00:16:17.343 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:17.343 "namespace": { 00:16:17.343 "bdev_name": "Malloc0" 00:16:17.343 } 00:16:17.343 } 00:16:17.343 } 00:16:17.343 Got JSON-RPC error response 00:16:17.343 GoRPCClient: error on JSON-RPC call 00:16:17.343 Adding namespace failed - expected result. 00:16:17.343 test case2: host connect to nvmf target in multiple paths 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@29 -- # nmic_status=1 00:16:17.343 10:03:15 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:17.343 10:03:15 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
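Test case 1 passes precisely because the second nvmf_subsystem_add_ns is rejected: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to cnode2 fails with JSON-RPC error -32602 and the script records nmic_status=1, the expected result. Replayed by hand with the rpc.py used elsewhere in this log, the same sequence would look roughly like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # succeeds and claims the bdev
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: bdev was shared across subsystems"
  else
      echo "rejected as expected: bdev already claimed"
  fi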
00:16:17.343 10:03:15 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:17.343 10:03:15 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:17.343 10:03:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.343 10:03:15 -- common/autotest_common.sh@10 -- # set +x 00:16:17.343 [2024-12-16 10:03:15.787110] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:17.343 10:03:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.343 10:03:15 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.601 10:03:15 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:17.601 10:03:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:17.601 10:03:16 -- common/autotest_common.sh@1187 -- # local i=0 00:16:17.601 10:03:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.601 10:03:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:17.601 10:03:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:20.131 10:03:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:20.131 10:03:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:20.131 10:03:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.131 10:03:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:20.131 10:03:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.131 10:03:18 -- common/autotest_common.sh@1197 -- # return 0 00:16:20.131 10:03:18 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:20.131 [global] 00:16:20.131 thread=1 00:16:20.131 invalidate=1 00:16:20.131 rw=write 00:16:20.131 time_based=1 00:16:20.131 runtime=1 00:16:20.131 ioengine=libaio 00:16:20.131 direct=1 00:16:20.131 bs=4096 00:16:20.131 iodepth=1 00:16:20.131 norandommap=0 00:16:20.131 numjobs=1 00:16:20.131 00:16:20.131 verify_dump=1 00:16:20.131 verify_backlog=512 00:16:20.131 verify_state_save=0 00:16:20.131 do_verify=1 00:16:20.131 verify=crc32c-intel 00:16:20.131 [job0] 00:16:20.131 filename=/dev/nvme0n1 00:16:20.131 Could not set queue depth (nvme0n1) 00:16:20.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:20.131 fio-3.35 00:16:20.131 Starting 1 thread 00:16:21.066 00:16:21.066 job0: (groupid=0, jobs=1): err= 0: pid=86752: Mon Dec 16 10:03:19 2024 00:16:21.066 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1000msec) 00:16:21.066 slat (nsec): min=11331, max=49646, avg=14065.67, stdev=3997.00 00:16:21.066 clat (usec): min=116, max=510, avg=136.89, stdev=15.03 00:16:21.066 lat (usec): min=128, max=524, avg=150.95, stdev=15.60 00:16:21.066 clat percentiles (usec): 00:16:21.066 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 127], 00:16:21.066 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 137], 00:16:21.066 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:16:21.066 | 99.00th=[ 178], 99.50th=[ 186], 
99.90th=[ 215], 99.95th=[ 359], 00:16:21.066 | 99.99th=[ 510] 00:16:21.066 write: IOPS=3686, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1000msec); 0 zone resets 00:16:21.066 slat (usec): min=17, max=108, avg=21.61, stdev= 6.25 00:16:21.066 clat (usec): min=83, max=174, avg=99.98, stdev=11.74 00:16:21.066 lat (usec): min=100, max=283, avg=121.59, stdev=13.91 00:16:21.066 clat percentiles (usec): 00:16:21.066 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 92], 00:16:21.066 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 98], 00:16:21.066 | 70.00th=[ 102], 80.00th=[ 110], 90.00th=[ 119], 95.00th=[ 125], 00:16:21.066 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 159], 99.95th=[ 174], 00:16:21.066 | 99.99th=[ 176] 00:16:21.066 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:21.066 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:21.066 lat (usec) : 100=33.59%, 250=66.38%, 500=0.01%, 750=0.01% 00:16:21.066 cpu : usr=3.00%, sys=9.00%, ctx=7270, majf=0, minf=5 00:16:21.066 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.066 issued rwts: total=3584,3686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.066 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.066 00:16:21.066 Run status group 0 (all jobs): 00:16:21.066 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1000-1000msec 00:16:21.066 WRITE: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=14.4MiB (15.1MB), run=1000-1000msec 00:16:21.066 00:16:21.066 Disk stats (read/write): 00:16:21.066 nvme0n1: ios=3122/3514, merge=0/0, ticks=465/399, in_queue=864, util=91.08% 00:16:21.066 10:03:19 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:21.066 10:03:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.066 10:03:19 -- common/autotest_common.sh@1208 -- # local i=0 00:16:21.066 10:03:19 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:21.066 10:03:19 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.066 10:03:19 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:21.066 10:03:19 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.066 10:03:19 -- common/autotest_common.sh@1220 -- # return 0 00:16:21.066 10:03:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:21.066 10:03:19 -- target/nmic.sh@53 -- # nvmftestfini 00:16:21.066 10:03:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:21.066 10:03:19 -- nvmf/common.sh@116 -- # sync 00:16:21.324 10:03:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:21.324 10:03:19 -- nvmf/common.sh@119 -- # set +e 00:16:21.324 10:03:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:21.324 10:03:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:21.324 rmmod nvme_tcp 00:16:21.324 rmmod nvme_fabrics 00:16:21.324 rmmod nvme_keyring 00:16:21.324 10:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:21.324 10:03:19 -- nvmf/common.sh@123 -- # set -e 00:16:21.324 10:03:19 -- nvmf/common.sh@124 -- # return 0 00:16:21.324 10:03:19 -- nvmf/common.sh@477 -- # '[' -n 86637 ']' 00:16:21.324 10:03:19 -- nvmf/common.sh@478 -- # 
killprocess 86637 00:16:21.324 10:03:19 -- common/autotest_common.sh@936 -- # '[' -z 86637 ']' 00:16:21.324 10:03:19 -- common/autotest_common.sh@940 -- # kill -0 86637 00:16:21.324 10:03:19 -- common/autotest_common.sh@941 -- # uname 00:16:21.324 10:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.324 10:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86637 00:16:21.324 10:03:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:21.324 10:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:21.324 killing process with pid 86637 00:16:21.324 10:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86637' 00:16:21.324 10:03:19 -- common/autotest_common.sh@955 -- # kill 86637 00:16:21.324 10:03:19 -- common/autotest_common.sh@960 -- # wait 86637 00:16:21.583 10:03:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:21.583 10:03:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:21.583 10:03:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:21.583 10:03:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.583 10:03:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:21.583 10:03:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.583 10:03:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.583 10:03:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.583 10:03:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:21.583 00:16:21.583 real 0m5.891s 00:16:21.583 user 0m19.848s 00:16:21.583 sys 0m1.416s 00:16:21.583 10:03:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:21.583 ************************************ 00:16:21.583 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:21.583 END TEST nvmf_nmic 00:16:21.583 ************************************ 00:16:21.583 10:03:20 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:21.583 10:03:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:21.583 10:03:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.583 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:21.583 ************************************ 00:16:21.583 START TEST nvmf_fio_target 00:16:21.583 ************************************ 00:16:21.583 10:03:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:21.583 * Looking for test storage... 
00:16:21.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:21.583 10:03:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:21.583 10:03:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:21.583 10:03:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:21.842 10:03:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:21.842 10:03:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:21.842 10:03:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:21.842 10:03:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:21.842 10:03:20 -- scripts/common.sh@335 -- # IFS=.-: 00:16:21.842 10:03:20 -- scripts/common.sh@335 -- # read -ra ver1 00:16:21.842 10:03:20 -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.842 10:03:20 -- scripts/common.sh@336 -- # read -ra ver2 00:16:21.842 10:03:20 -- scripts/common.sh@337 -- # local 'op=<' 00:16:21.842 10:03:20 -- scripts/common.sh@339 -- # ver1_l=2 00:16:21.842 10:03:20 -- scripts/common.sh@340 -- # ver2_l=1 00:16:21.842 10:03:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:21.842 10:03:20 -- scripts/common.sh@343 -- # case "$op" in 00:16:21.842 10:03:20 -- scripts/common.sh@344 -- # : 1 00:16:21.842 10:03:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:21.842 10:03:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.842 10:03:20 -- scripts/common.sh@364 -- # decimal 1 00:16:21.842 10:03:20 -- scripts/common.sh@352 -- # local d=1 00:16:21.842 10:03:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.842 10:03:20 -- scripts/common.sh@354 -- # echo 1 00:16:21.842 10:03:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:21.842 10:03:20 -- scripts/common.sh@365 -- # decimal 2 00:16:21.842 10:03:20 -- scripts/common.sh@352 -- # local d=2 00:16:21.842 10:03:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.842 10:03:20 -- scripts/common.sh@354 -- # echo 2 00:16:21.842 10:03:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:21.842 10:03:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:21.842 10:03:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:21.842 10:03:20 -- scripts/common.sh@367 -- # return 0 00:16:21.842 10:03:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.842 10:03:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.842 --rc genhtml_branch_coverage=1 00:16:21.842 --rc genhtml_function_coverage=1 00:16:21.842 --rc genhtml_legend=1 00:16:21.842 --rc geninfo_all_blocks=1 00:16:21.842 --rc geninfo_unexecuted_blocks=1 00:16:21.842 00:16:21.842 ' 00:16:21.842 10:03:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.842 --rc genhtml_branch_coverage=1 00:16:21.842 --rc genhtml_function_coverage=1 00:16:21.842 --rc genhtml_legend=1 00:16:21.842 --rc geninfo_all_blocks=1 00:16:21.842 --rc geninfo_unexecuted_blocks=1 00:16:21.842 00:16:21.842 ' 00:16:21.843 10:03:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.843 --rc genhtml_branch_coverage=1 00:16:21.843 --rc genhtml_function_coverage=1 00:16:21.843 --rc genhtml_legend=1 00:16:21.843 --rc geninfo_all_blocks=1 00:16:21.843 --rc geninfo_unexecuted_blocks=1 00:16:21.843 00:16:21.843 ' 00:16:21.843 
10:03:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.843 --rc genhtml_branch_coverage=1 00:16:21.843 --rc genhtml_function_coverage=1 00:16:21.843 --rc genhtml_legend=1 00:16:21.843 --rc geninfo_all_blocks=1 00:16:21.843 --rc geninfo_unexecuted_blocks=1 00:16:21.843 00:16:21.843 ' 00:16:21.843 10:03:20 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.843 10:03:20 -- nvmf/common.sh@7 -- # uname -s 00:16:21.843 10:03:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.843 10:03:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.843 10:03:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.843 10:03:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.843 10:03:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.843 10:03:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.843 10:03:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.843 10:03:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.843 10:03:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.843 10:03:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:21.843 10:03:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:21.843 10:03:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.843 10:03:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.843 10:03:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.843 10:03:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.843 10:03:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.843 10:03:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.843 10:03:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.843 10:03:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.843 10:03:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.843 10:03:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.843 10:03:20 -- paths/export.sh@5 -- # export PATH 00:16:21.843 10:03:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.843 10:03:20 -- nvmf/common.sh@46 -- # : 0 00:16:21.843 10:03:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.843 10:03:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.843 10:03:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.843 10:03:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.843 10:03:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.843 10:03:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.843 10:03:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.843 10:03:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.843 10:03:20 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.843 10:03:20 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.843 10:03:20 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.843 10:03:20 -- target/fio.sh@16 -- # nvmftestinit 00:16:21.843 10:03:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.843 10:03:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.843 10:03:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.843 10:03:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.843 10:03:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.843 10:03:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.843 10:03:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.843 10:03:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.843 10:03:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:21.843 10:03:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:21.843 10:03:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.843 10:03:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.843 10:03:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.843 10:03:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:21.843 10:03:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.843 10:03:20 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.843 10:03:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.843 10:03:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.843 10:03:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.843 10:03:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.843 10:03:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.843 10:03:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.843 10:03:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:21.843 10:03:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:21.843 Cannot find device "nvmf_tgt_br" 00:16:21.843 10:03:20 -- nvmf/common.sh@154 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.843 Cannot find device "nvmf_tgt_br2" 00:16:21.843 10:03:20 -- nvmf/common.sh@155 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:21.843 10:03:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:21.843 Cannot find device "nvmf_tgt_br" 00:16:21.843 10:03:20 -- nvmf/common.sh@157 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:21.843 Cannot find device "nvmf_tgt_br2" 00:16:21.843 10:03:20 -- nvmf/common.sh@158 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:21.843 10:03:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:21.843 10:03:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.843 10:03:20 -- nvmf/common.sh@161 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.843 10:03:20 -- nvmf/common.sh@162 -- # true 00:16:21.843 10:03:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.843 10:03:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.843 10:03:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.102 10:03:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.102 10:03:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.102 10:03:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.102 10:03:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.102 10:03:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.102 10:03:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.102 10:03:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:22.102 10:03:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:22.102 10:03:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:22.102 10:03:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:22.102 10:03:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.102 10:03:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:22.102 10:03:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.102 10:03:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:22.102 10:03:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:22.102 10:03:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.102 10:03:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.102 10:03:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.102 10:03:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.102 10:03:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.102 10:03:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:22.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:22.102 00:16:22.102 --- 10.0.0.2 ping statistics --- 00:16:22.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.102 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:22.102 10:03:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:22.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:22.102 00:16:22.102 --- 10.0.0.3 ping statistics --- 00:16:22.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.102 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:22.102 10:03:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:22.102 00:16:22.102 --- 10.0.0.1 ping statistics --- 00:16:22.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.102 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:22.102 10:03:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.102 10:03:20 -- nvmf/common.sh@421 -- # return 0 00:16:22.102 10:03:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:22.102 10:03:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.102 10:03:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:22.102 10:03:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:22.102 10:03:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.102 10:03:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:22.102 10:03:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:22.102 10:03:20 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:22.102 10:03:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:22.102 10:03:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.102 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:22.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:22.102 10:03:20 -- nvmf/common.sh@469 -- # nvmfpid=86936 00:16:22.102 10:03:20 -- nvmf/common.sh@470 -- # waitforlisten 86936 00:16:22.102 10:03:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.102 10:03:20 -- common/autotest_common.sh@829 -- # '[' -z 86936 ']' 00:16:22.102 10:03:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.102 10:03:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.102 10:03:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.102 10:03:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.102 10:03:20 -- common/autotest_common.sh@10 -- # set +x 00:16:22.102 [2024-12-16 10:03:20.725145] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:22.361 [2024-12-16 10:03:20.725415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.361 [2024-12-16 10:03:20.864403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.361 [2024-12-16 10:03:20.922341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:22.361 [2024-12-16 10:03:20.922497] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.361 [2024-12-16 10:03:20.922510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.361 [2024-12-16 10:03:20.922535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:22.361 [2024-12-16 10:03:20.922638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.361 [2024-12-16 10:03:20.922738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.361 [2024-12-16 10:03:20.924374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.361 [2024-12-16 10:03:20.924404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.295 10:03:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.295 10:03:21 -- common/autotest_common.sh@862 -- # return 0 00:16:23.295 10:03:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.296 10:03:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.296 10:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:23.296 10:03:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.296 10:03:21 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:23.554 [2024-12-16 10:03:21.934041] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.554 10:03:21 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:23.814 10:03:22 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:23.814 10:03:22 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.073 10:03:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:24.073 10:03:22 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.332 10:03:22 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:24.332 10:03:22 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.590 10:03:23 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:24.590 10:03:23 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:24.849 10:03:23 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.108 10:03:23 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:25.108 10:03:23 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.366 10:03:23 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:25.366 10:03:23 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.637 10:03:24 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:25.637 10:03:24 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:25.922 10:03:24 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:26.180 10:03:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:26.180 10:03:24 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.438 10:03:24 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:26.438 10:03:24 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:26.695 10:03:25 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.953 [2024-12-16 10:03:25.364738] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.953 10:03:25 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:27.211 10:03:25 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:27.211 10:03:25 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.469 10:03:25 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:27.469 10:03:25 -- common/autotest_common.sh@1187 -- # local i=0 00:16:27.469 10:03:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.469 10:03:25 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:27.469 10:03:25 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:27.469 10:03:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:29.372 10:03:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:29.372 10:03:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:29.372 10:03:27 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.631 10:03:28 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:29.631 10:03:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.631 10:03:28 -- common/autotest_common.sh@1197 -- # return 0 00:16:29.631 10:03:28 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:29.631 [global] 00:16:29.631 thread=1 00:16:29.631 invalidate=1 00:16:29.631 rw=write 00:16:29.631 time_based=1 00:16:29.631 runtime=1 00:16:29.631 ioengine=libaio 00:16:29.631 direct=1 00:16:29.631 bs=4096 00:16:29.631 iodepth=1 00:16:29.631 norandommap=0 00:16:29.631 numjobs=1 00:16:29.631 00:16:29.631 verify_dump=1 00:16:29.631 verify_backlog=512 00:16:29.631 verify_state_save=0 00:16:29.631 do_verify=1 00:16:29.631 verify=crc32c-intel 00:16:29.631 [job0] 00:16:29.631 filename=/dev/nvme0n1 00:16:29.631 [job1] 00:16:29.631 filename=/dev/nvme0n2 00:16:29.631 [job2] 00:16:29.631 filename=/dev/nvme0n3 00:16:29.631 [job3] 00:16:29.631 filename=/dev/nvme0n4 00:16:29.631 Could not set queue depth (nvme0n1) 00:16:29.631 Could not set queue depth (nvme0n2) 00:16:29.631 Could not set queue depth (nvme0n3) 00:16:29.631 Could not set queue depth (nvme0n4) 00:16:29.631 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.631 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.631 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.631 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.631 fio-3.35 00:16:29.631 Starting 4 threads 00:16:31.007 00:16:31.007 job0: (groupid=0, jobs=1): err= 0: pid=87229: Mon Dec 16 10:03:29 2024 00:16:31.007 read: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:16:31.007 slat (nsec): min=16404, max=54664, avg=19685.87, stdev=4922.03 00:16:31.007 clat (usec): min=126, max=3054, avg=156.50, stdev=56.04 
00:16:31.007 lat (usec): min=143, max=3075, avg=176.19, stdev=56.27 00:16:31.007 clat percentiles (usec): 00:16:31.007 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:16:31.007 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:16:31.007 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 184], 00:16:31.007 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 424], 99.95th=[ 510], 00:16:31.007 | 99.99th=[ 3064] 00:16:31.007 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:31.007 slat (usec): min=23, max=103, avg=29.14, stdev= 7.39 00:16:31.007 clat (usec): min=94, max=216, avg=122.94, stdev=14.54 00:16:31.007 lat (usec): min=119, max=262, avg=152.08, stdev=16.85 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 111], 00:16:31.008 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:16:31.008 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 151], 00:16:31.008 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 192], 00:16:31.008 | 99.99th=[ 217] 00:16:31.008 bw ( KiB/s): min=12288, max=12288, per=29.41%, avg=12288.00, stdev= 0.00, samples=1 00:16:31.008 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:31.008 lat (usec) : 100=0.60%, 250=99.35%, 500=0.02%, 750=0.02% 00:16:31.008 lat (msec) : 4=0.02% 00:16:31.008 cpu : usr=3.10%, sys=10.50%, ctx=6028, majf=0, minf=5 00:16:31.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:31.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 issued rwts: total=2956,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:31.008 job1: (groupid=0, jobs=1): err= 0: pid=87230: Mon Dec 16 10:03:29 2024 00:16:31.008 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:31.008 slat (nsec): min=12130, max=61333, avg=15071.58, stdev=4057.39 00:16:31.008 clat (usec): min=119, max=951, avg=152.87, stdev=23.09 00:16:31.008 lat (usec): min=133, max=978, avg=167.94, stdev=23.71 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:16:31.008 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:16:31.008 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:16:31.008 | 99.00th=[ 196], 99.50th=[ 237], 99.90th=[ 404], 99.95th=[ 441], 00:16:31.008 | 99.99th=[ 955] 00:16:31.008 write: IOPS=3284, BW=12.8MiB/s (13.5MB/s)(12.8MiB/1001msec); 0 zone resets 00:16:31.008 slat (nsec): min=18022, max=88111, avg=22564.63, stdev=6049.14 00:16:31.008 clat (usec): min=90, max=640, avg=121.46, stdev=19.80 00:16:31.008 lat (usec): min=109, max=664, avg=144.02, stdev=21.17 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:16:31.008 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:16:31.008 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 149], 00:16:31.008 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 351], 99.95th=[ 375], 00:16:31.008 | 99.99th=[ 644] 00:16:31.008 bw ( KiB/s): min=12616, max=12616, per=30.19%, avg=12616.00, stdev= 0.00, samples=1 00:16:31.008 iops : min= 3154, max= 3154, avg=3154.00, stdev= 0.00, samples=1 00:16:31.008 lat (usec) : 100=1.31%, 250=98.32%, 500=0.35%, 750=0.02%, 
1000=0.02% 00:16:31.008 cpu : usr=2.40%, sys=8.90%, ctx=6361, majf=0, minf=13 00:16:31.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:31.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 issued rwts: total=3072,3288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:31.008 job2: (groupid=0, jobs=1): err= 0: pid=87231: Mon Dec 16 10:03:29 2024 00:16:31.008 read: IOPS=1641, BW=6565KiB/s (6723kB/s)(6572KiB/1001msec) 00:16:31.008 slat (nsec): min=10196, max=59420, avg=14732.55, stdev=4685.97 00:16:31.008 clat (usec): min=135, max=381, avg=276.42, stdev=20.06 00:16:31.008 lat (usec): min=160, max=397, avg=291.16, stdev=20.58 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 260], 00:16:31.008 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:16:31.008 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:16:31.008 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 383], 00:16:31.008 | 99.99th=[ 383] 00:16:31.008 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:31.008 slat (nsec): min=15045, max=87590, avg=22313.94, stdev=6540.15 00:16:31.008 clat (usec): min=104, max=2373, avg=229.68, stdev=63.14 00:16:31.008 lat (usec): min=135, max=2397, avg=252.00, stdev=63.22 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:16:31.008 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:16:31.008 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:16:31.008 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 881], 99.95th=[ 1663], 00:16:31.008 | 99.99th=[ 2376] 00:16:31.008 bw ( KiB/s): min= 8192, max= 8192, per=19.61%, avg=8192.00, stdev= 0.00, samples=1 00:16:31.008 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:31.008 lat (usec) : 250=51.67%, 500=48.23%, 750=0.03%, 1000=0.03% 00:16:31.008 lat (msec) : 2=0.03%, 4=0.03% 00:16:31.008 cpu : usr=0.90%, sys=5.60%, ctx=3694, majf=0, minf=7 00:16:31.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:31.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 issued rwts: total=1643,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:31.008 job3: (groupid=0, jobs=1): err= 0: pid=87232: Mon Dec 16 10:03:29 2024 00:16:31.008 read: IOPS=1642, BW=6568KiB/s (6726kB/s)(6568KiB/1000msec) 00:16:31.008 slat (nsec): min=9919, max=47317, avg=12501.56, stdev=3453.36 00:16:31.008 clat (usec): min=196, max=381, avg=278.75, stdev=20.36 00:16:31.008 lat (usec): min=220, max=399, avg=291.25, stdev=20.78 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:16:31.008 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:16:31.008 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:16:31.008 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 383], 00:16:31.008 | 99.99th=[ 383] 00:16:31.008 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:16:31.008 slat (usec): min=14, 
max=103, avg=21.79, stdev= 5.56 00:16:31.008 clat (usec): min=109, max=2319, avg=230.13, stdev=62.56 00:16:31.008 lat (usec): min=130, max=2343, avg=251.92, stdev=62.69 00:16:31.008 clat percentiles (usec): 00:16:31.008 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:16:31.008 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:16:31.008 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:16:31.008 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 816], 99.95th=[ 1745], 00:16:31.008 | 99.99th=[ 2311] 00:16:31.008 bw ( KiB/s): min= 8192, max= 8192, per=19.61%, avg=8192.00, stdev= 0.00, samples=1 00:16:31.008 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:31.008 lat (usec) : 250=50.60%, 500=49.30%, 750=0.03%, 1000=0.03% 00:16:31.008 lat (msec) : 2=0.03%, 4=0.03% 00:16:31.008 cpu : usr=1.80%, sys=4.70%, ctx=3693, majf=0, minf=13 00:16:31.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:31.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.008 issued rwts: total=1642,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:31.008 00:16:31.008 Run status group 0 (all jobs): 00:16:31.008 READ: bw=36.3MiB/s (38.1MB/s), 6565KiB/s-12.0MiB/s (6723kB/s-12.6MB/s), io=36.4MiB (38.1MB), run=1000-1001msec 00:16:31.008 WRITE: bw=40.8MiB/s (42.8MB/s), 8184KiB/s-12.8MiB/s (8380kB/s-13.5MB/s), io=40.8MiB (42.8MB), run=1000-1001msec 00:16:31.008 00:16:31.008 Disk stats (read/write): 00:16:31.008 nvme0n1: ios=2610/2624, merge=0/0, ticks=445/350, in_queue=795, util=88.26% 00:16:31.008 nvme0n2: ios=2604/2919, merge=0/0, ticks=428/386, in_queue=814, util=88.86% 00:16:31.008 nvme0n3: ios=1536/1620, merge=0/0, ticks=425/379, in_queue=804, util=89.26% 00:16:31.008 nvme0n4: ios=1536/1620, merge=0/0, ticks=420/397, in_queue=817, util=89.81% 00:16:31.008 10:03:29 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:31.008 [global] 00:16:31.008 thread=1 00:16:31.008 invalidate=1 00:16:31.008 rw=randwrite 00:16:31.008 time_based=1 00:16:31.008 runtime=1 00:16:31.008 ioengine=libaio 00:16:31.008 direct=1 00:16:31.008 bs=4096 00:16:31.008 iodepth=1 00:16:31.008 norandommap=0 00:16:31.008 numjobs=1 00:16:31.008 00:16:31.008 verify_dump=1 00:16:31.008 verify_backlog=512 00:16:31.008 verify_state_save=0 00:16:31.008 do_verify=1 00:16:31.008 verify=crc32c-intel 00:16:31.008 [job0] 00:16:31.008 filename=/dev/nvme0n1 00:16:31.008 [job1] 00:16:31.008 filename=/dev/nvme0n2 00:16:31.008 [job2] 00:16:31.008 filename=/dev/nvme0n3 00:16:31.008 [job3] 00:16:31.008 filename=/dev/nvme0n4 00:16:31.008 Could not set queue depth (nvme0n1) 00:16:31.008 Could not set queue depth (nvme0n2) 00:16:31.008 Could not set queue depth (nvme0n3) 00:16:31.008 Could not set queue depth (nvme0n4) 00:16:31.008 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.008 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.008 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.008 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.008 fio-3.35 00:16:31.008 
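Note on reading the fio summaries in this section: every job here runs with bs=4096, so the iops and bw lines are the same measurement in different units and can be cross-checked directly from figures already printed in this log. For example, using the 12288 KiB/s / 3072 IOPS samples reported in the run below:

    3072 IOPS x 4096 B = 12,582,912 B/s
                       = 12288 KiB/s
                       = 12.0 MiB/s (about 12.6 MB/s)

which matches the paired "bw ( KiB/s)" and "iops" lines fio prints for those jobs.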
Starting 4 threads 00:16:32.386 00:16:32.386 job0: (groupid=0, jobs=1): err= 0: pid=87289: Mon Dec 16 10:03:30 2024 00:16:32.386 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:32.386 slat (nsec): min=11768, max=47506, avg=13544.63, stdev=2834.08 00:16:32.386 clat (usec): min=126, max=265, avg=153.24, stdev=12.17 00:16:32.386 lat (usec): min=138, max=278, avg=166.79, stdev=12.47 00:16:32.386 clat percentiles (usec): 00:16:32.386 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:16:32.386 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:16:32.386 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:16:32.386 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 215], 99.95th=[ 233], 00:16:32.386 | 99.99th=[ 265] 00:16:32.386 write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:16:32.386 slat (usec): min=17, max=151, avg=20.22, stdev= 5.44 00:16:32.386 clat (usec): min=94, max=2532, avg=125.24, stdev=54.84 00:16:32.386 lat (usec): min=113, max=2552, avg=145.46, stdev=55.15 00:16:32.386 clat percentiles (usec): 00:16:32.386 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:16:32.386 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:16:32.386 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:16:32.386 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 494], 99.95th=[ 1909], 00:16:32.386 | 99.99th=[ 2540] 00:16:32.386 bw ( KiB/s): min=13024, max=13024, per=26.06%, avg=13024.00, stdev= 0.00, samples=1 00:16:32.386 iops : min= 3256, max= 3256, avg=3256.00, stdev= 0.00, samples=1 00:16:32.386 lat (usec) : 100=0.25%, 250=99.64%, 500=0.06%, 750=0.02% 00:16:32.386 lat (msec) : 2=0.02%, 4=0.02% 00:16:32.386 cpu : usr=2.30%, sys=7.70%, ctx=6364, majf=0, minf=13 00:16:32.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 issued rwts: total=3072,3291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.387 job1: (groupid=0, jobs=1): err= 0: pid=87290: Mon Dec 16 10:03:30 2024 00:16:32.387 read: IOPS=2986, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:16:32.387 slat (nsec): min=16541, max=54435, avg=18789.31, stdev=3367.04 00:16:32.387 clat (usec): min=126, max=572, avg=153.39, stdev=14.88 00:16:32.387 lat (usec): min=144, max=592, avg=172.18, stdev=15.32 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:16:32.387 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:16:32.387 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:16:32.387 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 217], 99.95th=[ 453], 00:16:32.387 | 99.99th=[ 570] 00:16:32.387 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:32.387 slat (usec): min=23, max=110, avg=26.70, stdev= 6.00 00:16:32.387 clat (usec): min=98, max=253, avg=127.69, stdev=12.90 00:16:32.387 lat (usec): min=124, max=279, avg=154.39, stdev=14.34 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:16:32.387 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:16:32.387 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 153], 00:16:32.387 | 99.00th=[ 
165], 99.50th=[ 167], 99.90th=[ 198], 99.95th=[ 229], 00:16:32.387 | 99.99th=[ 253] 00:16:32.387 bw ( KiB/s): min=12288, max=12288, per=24.59%, avg=12288.00, stdev= 0.00, samples=1 00:16:32.387 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:32.387 lat (usec) : 100=0.08%, 250=99.87%, 500=0.03%, 750=0.02% 00:16:32.387 cpu : usr=2.00%, sys=10.70%, ctx=6061, majf=0, minf=11 00:16:32.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 issued rwts: total=2989,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.387 job2: (groupid=0, jobs=1): err= 0: pid=87291: Mon Dec 16 10:03:30 2024 00:16:32.387 read: IOPS=2822, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:16:32.387 slat (nsec): min=12043, max=45453, avg=14651.30, stdev=3028.43 00:16:32.387 clat (usec): min=135, max=7128, avg=164.09, stdev=135.23 00:16:32.387 lat (usec): min=148, max=7144, avg=178.74, stdev=135.32 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:16:32.387 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:16:32.387 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:16:32.387 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 947], 99.95th=[ 1582], 00:16:32.387 | 99.99th=[ 7111] 00:16:32.387 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:32.387 slat (usec): min=17, max=163, avg=22.03, stdev= 5.63 00:16:32.387 clat (usec): min=105, max=2044, avg=136.15, stdev=41.28 00:16:32.387 lat (usec): min=125, max=2064, avg=158.17, stdev=41.79 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:16:32.387 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:16:32.387 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:16:32.387 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 351], 99.95th=[ 1090], 00:16:32.387 | 99.99th=[ 2040] 00:16:32.387 bw ( KiB/s): min=12288, max=12288, per=24.59%, avg=12288.00, stdev= 0.00, samples=1 00:16:32.387 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:32.387 lat (usec) : 250=99.88%, 500=0.03%, 1000=0.02% 00:16:32.387 lat (msec) : 2=0.03%, 4=0.02%, 10=0.02% 00:16:32.387 cpu : usr=2.60%, sys=7.60%, ctx=5899, majf=0, minf=11 00:16:32.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 issued rwts: total=2825,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.387 job3: (groupid=0, jobs=1): err= 0: pid=87292: Mon Dec 16 10:03:30 2024 00:16:32.387 read: IOPS=2948, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:16:32.387 slat (nsec): min=12846, max=50014, avg=14481.25, stdev=2860.64 00:16:32.387 clat (usec): min=132, max=1772, avg=159.18, stdev=33.56 00:16:32.387 lat (usec): min=145, max=1788, avg=173.66, stdev=33.78 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149], 00:16:32.387 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 
159], 00:16:32.387 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:16:32.387 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 322], 99.95th=[ 676], 00:16:32.387 | 99.99th=[ 1778] 00:16:32.387 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:32.387 slat (nsec): min=18746, max=84632, avg=21264.97, stdev=5114.35 00:16:32.387 clat (usec): min=101, max=1681, avg=134.35, stdev=31.07 00:16:32.387 lat (usec): min=122, max=1701, avg=155.62, stdev=31.57 00:16:32.387 clat percentiles (usec): 00:16:32.387 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 124], 00:16:32.387 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:16:32.387 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 159], 00:16:32.387 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 241], 99.95th=[ 355], 00:16:32.387 | 99.99th=[ 1680] 00:16:32.387 bw ( KiB/s): min=12288, max=12288, per=24.59%, avg=12288.00, stdev= 0.00, samples=1 00:16:32.387 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:32.387 lat (usec) : 250=99.88%, 500=0.07%, 750=0.02% 00:16:32.387 lat (msec) : 2=0.03% 00:16:32.387 cpu : usr=2.10%, sys=7.90%, ctx=6023, majf=0, minf=11 00:16:32.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.387 issued rwts: total=2951,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.387 00:16:32.387 Run status group 0 (all jobs): 00:16:32.387 READ: bw=46.2MiB/s (48.4MB/s), 11.0MiB/s-12.0MiB/s (11.6MB/s-12.6MB/s), io=46.2MiB (48.5MB), run=1001-1001msec 00:16:32.387 WRITE: bw=48.8MiB/s (51.2MB/s), 12.0MiB/s-12.8MiB/s (12.6MB/s-13.5MB/s), io=48.9MiB (51.2MB), run=1001-1001msec 00:16:32.387 00:16:32.387 Disk stats (read/write): 00:16:32.387 nvme0n1: ios=2610/2939, merge=0/0, ticks=430/385, in_queue=815, util=87.88% 00:16:32.387 nvme0n2: ios=2592/2661, merge=0/0, ticks=423/369, in_queue=792, util=88.13% 00:16:32.387 nvme0n3: ios=2558/2560, merge=0/0, ticks=407/377, in_queue=784, util=89.42% 00:16:32.387 nvme0n4: ios=2560/2631, merge=0/0, ticks=425/384, in_queue=809, util=89.79% 00:16:32.387 10:03:30 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:32.387 [global] 00:16:32.387 thread=1 00:16:32.387 invalidate=1 00:16:32.387 rw=write 00:16:32.387 time_based=1 00:16:32.387 runtime=1 00:16:32.387 ioengine=libaio 00:16:32.387 direct=1 00:16:32.387 bs=4096 00:16:32.387 iodepth=128 00:16:32.387 norandommap=0 00:16:32.387 numjobs=1 00:16:32.387 00:16:32.387 verify_dump=1 00:16:32.387 verify_backlog=512 00:16:32.387 verify_state_save=0 00:16:32.387 do_verify=1 00:16:32.387 verify=crc32c-intel 00:16:32.387 [job0] 00:16:32.387 filename=/dev/nvme0n1 00:16:32.387 [job1] 00:16:32.387 filename=/dev/nvme0n2 00:16:32.387 [job2] 00:16:32.387 filename=/dev/nvme0n3 00:16:32.387 [job3] 00:16:32.387 filename=/dev/nvme0n4 00:16:32.387 Could not set queue depth (nvme0n1) 00:16:32.387 Could not set queue depth (nvme0n2) 00:16:32.387 Could not set queue depth (nvme0n3) 00:16:32.387 Could not set queue depth (nvme0n4) 00:16:32.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.387 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:16:32.387 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.387 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.387 fio-3.35 00:16:32.387 Starting 4 threads 00:16:33.764 00:16:33.764 job0: (groupid=0, jobs=1): err= 0: pid=87349: Mon Dec 16 10:03:32 2024 00:16:33.764 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:16:33.764 slat (usec): min=3, max=9679, avg=86.18, stdev=559.23 00:16:33.764 clat (usec): min=3992, max=29601, avg=11756.16, stdev=3711.30 00:16:33.764 lat (usec): min=4003, max=29619, avg=11842.34, stdev=3745.69 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 9372], 00:16:33.764 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[11338], 00:16:33.764 | 70.00th=[12256], 80.00th=[13042], 90.00th=[16909], 95.00th=[20317], 00:16:33.764 | 99.00th=[26084], 99.50th=[27919], 99.90th=[29492], 99.95th=[29492], 00:16:33.764 | 99.99th=[29492] 00:16:33.764 write: IOPS=5797, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1003msec); 0 zone resets 00:16:33.764 slat (usec): min=5, max=8574, avg=81.66, stdev=565.93 00:16:33.764 clat (usec): min=1821, max=21868, avg=10458.22, stdev=1980.88 00:16:33.764 lat (usec): min=3692, max=23684, avg=10539.88, stdev=2058.06 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[ 4293], 5.00th=[ 7177], 10.00th=[ 8455], 20.00th=[ 9634], 00:16:33.764 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:16:33.764 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12256], 95.00th=[12387], 00:16:33.764 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20579], 99.95th=[20841], 00:16:33.764 | 99.99th=[21890] 00:16:33.764 bw ( KiB/s): min=20928, max=24576, per=33.85%, avg=22752.00, stdev=2579.53, samples=2 00:16:33.764 iops : min= 5232, max= 6144, avg=5688.00, stdev=644.88, samples=2 00:16:33.764 lat (msec) : 2=0.01%, 4=0.22%, 10=28.76%, 20=68.26%, 50=2.75% 00:16:33.764 cpu : usr=4.39%, sys=13.77%, ctx=652, majf=0, minf=13 00:16:33.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:33.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.764 issued rwts: total=5632,5815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.764 job1: (groupid=0, jobs=1): err= 0: pid=87354: Mon Dec 16 10:03:32 2024 00:16:33.764 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:16:33.764 slat (usec): min=2, max=8019, avg=174.30, stdev=708.94 00:16:33.764 clat (usec): min=12455, max=31958, avg=22746.12, stdev=2814.56 00:16:33.764 lat (usec): min=12472, max=32599, avg=22920.42, stdev=2782.14 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[13698], 5.00th=[17171], 10.00th=[19530], 20.00th=[21103], 00:16:33.764 | 30.00th=[21890], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:16:33.764 | 70.00th=[23987], 80.00th=[24773], 90.00th=[25822], 95.00th=[26870], 00:16:33.764 | 99.00th=[28967], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:16:33.764 | 99.99th=[31851] 00:16:33.764 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:33.764 slat (usec): min=4, max=6049, avg=172.52, stdev=722.95 00:16:33.764 clat (usec): min=1905, max=30476, avg=22128.86, stdev=4336.75 00:16:33.764 
lat (usec): min=1990, max=31263, avg=22301.39, stdev=4331.45 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[ 6783], 5.00th=[14091], 10.00th=[16712], 20.00th=[19268], 00:16:33.764 | 30.00th=[20317], 40.00th=[22938], 50.00th=[23725], 60.00th=[24249], 00:16:33.764 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25822], 95.00th=[26346], 00:16:33.764 | 99.00th=[28181], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:16:33.764 | 99.99th=[30540] 00:16:33.764 bw ( KiB/s): min=11224, max=12288, per=17.49%, avg=11756.00, stdev=752.36, samples=2 00:16:33.764 iops : min= 2806, max= 3072, avg=2939.00, stdev=188.09, samples=2 00:16:33.764 lat (msec) : 2=0.04%, 4=0.37%, 10=0.75%, 20=19.23%, 50=79.61% 00:16:33.764 cpu : usr=2.70%, sys=9.29%, ctx=780, majf=0, minf=9 00:16:33.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:33.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.764 issued rwts: total=2560,3066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.764 job2: (groupid=0, jobs=1): err= 0: pid=87356: Mon Dec 16 10:03:32 2024 00:16:33.764 read: IOPS=4596, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1006msec) 00:16:33.764 slat (usec): min=8, max=3317, avg=96.17, stdev=421.61 00:16:33.764 clat (usec): min=4771, max=18882, avg=12621.03, stdev=1036.44 00:16:33.764 lat (usec): min=6157, max=18892, avg=12717.21, stdev=963.10 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[12125], 00:16:33.764 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:16:33.764 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:16:33.764 | 99.00th=[14746], 99.50th=[15008], 99.90th=[16909], 99.95th=[18744], 00:16:33.764 | 99.99th=[19006] 00:16:33.764 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:16:33.764 slat (usec): min=10, max=3494, avg=101.03, stdev=382.59 00:16:33.764 clat (usec): min=7739, max=26955, avg=13412.98, stdev=2381.48 00:16:33.764 lat (usec): min=8264, max=26973, avg=13514.02, stdev=2383.30 00:16:33.764 clat percentiles (usec): 00:16:33.764 | 1.00th=[10421], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:16:33.764 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:16:33.765 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[19268], 00:16:33.765 | 99.00th=[22152], 99.50th=[22676], 99.90th=[26346], 99.95th=[26870], 00:16:33.765 | 99.99th=[26870] 00:16:33.765 bw ( KiB/s): min=19592, max=20480, per=29.81%, avg=20036.00, stdev=627.91, samples=2 00:16:33.765 iops : min= 4898, max= 5120, avg=5009.00, stdev=156.98, samples=2 00:16:33.765 lat (msec) : 10=0.69%, 20=96.80%, 50=2.51% 00:16:33.765 cpu : usr=5.07%, sys=13.13%, ctx=811, majf=0, minf=11 00:16:33.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:33.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.765 issued rwts: total=4624,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.765 job3: (groupid=0, jobs=1): err= 0: pid=87357: Mon Dec 16 10:03:32 2024 00:16:33.765 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:16:33.765 slat (usec): min=3, 
max=7683, avg=178.49, stdev=726.15 00:16:33.765 clat (usec): min=17392, max=31764, avg=23080.77, stdev=2273.26 00:16:33.765 lat (usec): min=17478, max=31787, avg=23259.26, stdev=2219.64 00:16:33.765 clat percentiles (usec): 00:16:33.765 | 1.00th=[18482], 5.00th=[19268], 10.00th=[20055], 20.00th=[21365], 00:16:33.765 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:16:33.765 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[26870], 00:16:33.765 | 99.00th=[30278], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:16:33.765 | 99.99th=[31851] 00:16:33.765 write: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec); 0 zone resets 00:16:33.765 slat (usec): min=11, max=6479, avg=178.82, stdev=711.62 00:16:33.765 clat (usec): min=3943, max=29337, avg=23082.42, stdev=3079.40 00:16:33.765 lat (usec): min=4661, max=29367, avg=23261.24, stdev=3038.96 00:16:33.765 clat percentiles (usec): 00:16:33.765 | 1.00th=[ 9372], 5.00th=[18744], 10.00th=[19530], 20.00th=[20579], 00:16:33.765 | 30.00th=[21890], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:16:33.765 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25822], 95.00th=[26084], 00:16:33.765 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:16:33.765 | 99.99th=[29230] 00:16:33.765 bw ( KiB/s): min= 9936, max=12312, per=16.55%, avg=11124.00, stdev=1680.09, samples=2 00:16:33.765 iops : min= 2484, max= 3078, avg=2781.00, stdev=420.02, samples=2 00:16:33.765 lat (msec) : 4=0.02%, 10=0.59%, 20=11.69%, 50=87.70% 00:16:33.765 cpu : usr=2.09%, sys=9.76%, ctx=757, majf=0, minf=17 00:16:33.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:33.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.765 issued rwts: total=2560,2905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.765 00:16:33.765 Run status group 0 (all jobs): 00:16:33.765 READ: bw=59.7MiB/s (62.6MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=60.1MiB (63.0MB), run=1002-1006msec 00:16:33.765 WRITE: bw=65.6MiB/s (68.8MB/s), 11.3MiB/s-22.6MiB/s (11.8MB/s-23.7MB/s), io=66.0MiB (69.2MB), run=1002-1006msec 00:16:33.765 00:16:33.765 Disk stats (read/write): 00:16:33.765 nvme0n1: ios=5170/5311, merge=0/0, ticks=51692/50519, in_queue=102211, util=89.28% 00:16:33.765 nvme0n2: ios=2236/2560, merge=0/0, ticks=11617/12835, in_queue=24452, util=89.40% 00:16:33.765 nvme0n3: ios=4128/4608, merge=0/0, ticks=12114/12714, in_queue=24828, util=90.08% 00:16:33.765 nvme0n4: ios=2197/2560, merge=0/0, ticks=11684/12898, in_queue=24582, util=89.31% 00:16:33.765 10:03:32 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:33.765 [global] 00:16:33.765 thread=1 00:16:33.765 invalidate=1 00:16:33.765 rw=randwrite 00:16:33.765 time_based=1 00:16:33.765 runtime=1 00:16:33.765 ioengine=libaio 00:16:33.765 direct=1 00:16:33.765 bs=4096 00:16:33.765 iodepth=128 00:16:33.765 norandommap=0 00:16:33.765 numjobs=1 00:16:33.765 00:16:33.765 verify_dump=1 00:16:33.765 verify_backlog=512 00:16:33.765 verify_state_save=0 00:16:33.765 do_verify=1 00:16:33.765 verify=crc32c-intel 00:16:33.765 [job0] 00:16:33.765 filename=/dev/nvme0n1 00:16:33.765 [job1] 00:16:33.765 filename=/dev/nvme0n2 00:16:33.765 [job2] 00:16:33.765 filename=/dev/nvme0n3 00:16:33.765 [job3] 00:16:33.765 
filename=/dev/nvme0n4 00:16:33.765 Could not set queue depth (nvme0n1) 00:16:33.765 Could not set queue depth (nvme0n2) 00:16:33.765 Could not set queue depth (nvme0n3) 00:16:33.765 Could not set queue depth (nvme0n4) 00:16:33.765 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.765 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.765 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.765 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:33.765 fio-3.35 00:16:33.765 Starting 4 threads 00:16:35.142 00:16:35.142 job0: (groupid=0, jobs=1): err= 0: pid=87410: Mon Dec 16 10:03:33 2024 00:16:35.142 read: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1011msec) 00:16:35.142 slat (usec): min=3, max=21253, avg=157.57, stdev=986.85 00:16:35.142 clat (usec): min=5136, max=51467, avg=19937.87, stdev=7856.08 00:16:35.142 lat (usec): min=5944, max=51499, avg=20095.44, stdev=7915.14 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10814], 20.00th=[11600], 00:16:35.142 | 30.00th=[14615], 40.00th=[16909], 50.00th=[17957], 60.00th=[21365], 00:16:35.142 | 70.00th=[24249], 80.00th=[26608], 90.00th=[30540], 95.00th=[33817], 00:16:35.142 | 99.00th=[41157], 99.50th=[41157], 99.90th=[49546], 99.95th=[49546], 00:16:35.142 | 99.99th=[51643] 00:16:35.142 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:16:35.142 slat (usec): min=4, max=13177, avg=177.76, stdev=805.69 00:16:35.142 clat (usec): min=5057, max=47071, avg=23764.57, stdev=9014.59 00:16:35.142 lat (usec): min=5075, max=47081, avg=23942.32, stdev=9079.83 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 6849], 5.00th=[ 9765], 10.00th=[11469], 20.00th=[15664], 00:16:35.142 | 30.00th=[18744], 40.00th=[21627], 50.00th=[23725], 60.00th=[25035], 00:16:35.142 | 70.00th=[27657], 80.00th=[31065], 90.00th=[37487], 95.00th=[41157], 00:16:35.142 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:16:35.142 | 99.99th=[46924] 00:16:35.142 bw ( KiB/s): min=11600, max=13002, per=19.91%, avg=12301.00, stdev=991.36, samples=2 00:16:35.142 iops : min= 2900, max= 3250, avg=3075.00, stdev=247.49, samples=2 00:16:35.142 lat (msec) : 10=5.40%, 20=38.79%, 50=55.80%, 100=0.02% 00:16:35.142 cpu : usr=2.57%, sys=8.02%, ctx=537, majf=0, minf=5 00:16:35.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:35.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.142 issued rwts: total=2760,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.142 job1: (groupid=0, jobs=1): err= 0: pid=87411: Mon Dec 16 10:03:33 2024 00:16:35.142 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:16:35.142 slat (usec): min=3, max=15646, avg=117.41, stdev=717.60 00:16:35.142 clat (usec): min=6881, max=45076, avg=15870.27, stdev=8328.89 00:16:35.142 lat (usec): min=6894, max=45980, avg=15987.68, stdev=8402.38 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:16:35.142 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 
00:16:35.142 | 70.00th=[16319], 80.00th=[25822], 90.00th=[29230], 95.00th=[32637], 00:16:35.142 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:16:35.142 | 99.99th=[44827] 00:16:35.142 write: IOPS=4383, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1011msec); 0 zone resets 00:16:35.142 slat (usec): min=5, max=9464, avg=110.13, stdev=594.16 00:16:35.142 clat (usec): min=5938, max=45083, avg=14172.86, stdev=6703.84 00:16:35.142 lat (usec): min=5963, max=45102, avg=14282.99, stdev=6745.27 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 6783], 5.00th=[ 7767], 10.00th=[ 9765], 20.00th=[10290], 00:16:35.142 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:16:35.142 | 70.00th=[13698], 80.00th=[19530], 90.00th=[24511], 95.00th=[27395], 00:16:35.142 | 99.00th=[39060], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 00:16:35.142 | 99.99th=[44827] 00:16:35.142 bw ( KiB/s): min= 9864, max=24625, per=27.91%, avg=17244.50, stdev=10437.60, samples=2 00:16:35.142 iops : min= 2466, max= 6156, avg=4311.00, stdev=2609.22, samples=2 00:16:35.142 lat (msec) : 10=10.11%, 20=66.30%, 50=23.59% 00:16:35.142 cpu : usr=3.86%, sys=10.89%, ctx=524, majf=0, minf=6 00:16:35.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:35.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.142 issued rwts: total=4096,4432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.142 job2: (groupid=0, jobs=1): err= 0: pid=87412: Mon Dec 16 10:03:33 2024 00:16:35.142 read: IOPS=5388, BW=21.0MiB/s (22.1MB/s)(21.1MiB/1001msec) 00:16:35.142 slat (usec): min=5, max=2810, avg=85.57, stdev=371.35 00:16:35.142 clat (usec): min=748, max=14989, avg=11391.53, stdev=1681.24 00:16:35.142 lat (usec): min=758, max=15005, avg=11477.10, stdev=1655.40 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10159], 00:16:35.142 | 30.00th=[10552], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:16:35.142 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13698], 00:16:35.142 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[15008], 00:16:35.142 | 99.99th=[15008] 00:16:35.142 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:16:35.142 slat (usec): min=10, max=4298, avg=88.12, stdev=362.04 00:16:35.142 clat (usec): min=7879, max=15058, avg=11539.39, stdev=1581.89 00:16:35.142 lat (usec): min=7897, max=15076, avg=11627.52, stdev=1570.64 00:16:35.142 clat percentiles (usec): 00:16:35.142 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10290], 00:16:35.142 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11600], 00:16:35.142 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:16:35.142 | 99.00th=[14222], 99.50th=[14615], 99.90th=[14877], 99.95th=[15008], 00:16:35.142 | 99.99th=[15008] 00:16:35.142 bw ( KiB/s): min=20521, max=24576, per=36.50%, avg=22548.50, stdev=2867.32, samples=2 00:16:35.142 iops : min= 5130, max= 6144, avg=5637.00, stdev=717.01, samples=2 00:16:35.142 lat (usec) : 750=0.01%, 1000=0.05% 00:16:35.142 lat (msec) : 4=0.29%, 10=14.47%, 20=85.19% 00:16:35.142 cpu : usr=5.20%, sys=14.20%, ctx=797, majf=0, minf=5 00:16:35.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:35.143 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.143 issued rwts: total=5394,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.143 job3: (groupid=0, jobs=1): err= 0: pid=87413: Mon Dec 16 10:03:33 2024 00:16:35.143 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:16:35.143 slat (usec): min=6, max=16909, avg=185.52, stdev=1192.83 00:16:35.143 clat (usec): min=10870, max=57381, avg=21022.51, stdev=8770.33 00:16:35.143 lat (usec): min=10885, max=57396, avg=21208.03, stdev=8897.67 00:16:35.143 clat percentiles (usec): 00:16:35.143 | 1.00th=[11207], 5.00th=[13960], 10.00th=[14353], 20.00th=[15795], 00:16:35.143 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[17957], 00:16:35.143 | 70.00th=[21627], 80.00th=[27395], 90.00th=[31589], 95.00th=[38011], 00:16:35.143 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:16:35.143 | 99.99th=[57410] 00:16:35.143 write: IOPS=2452, BW=9808KiB/s (10.0MB/s)(9916KiB/1011msec); 0 zone resets 00:16:35.143 slat (usec): min=11, max=25210, avg=241.88, stdev=1160.66 00:16:35.143 clat (usec): min=9736, max=68603, avg=33970.88, stdev=12364.82 00:16:35.143 lat (usec): min=10819, max=68659, avg=34212.77, stdev=12430.50 00:16:35.143 clat percentiles (usec): 00:16:35.143 | 1.00th=[13304], 5.00th=[16712], 10.00th=[20579], 20.00th=[23200], 00:16:35.143 | 30.00th=[24773], 40.00th=[27919], 50.00th=[31851], 60.00th=[34341], 00:16:35.143 | 70.00th=[40109], 80.00th=[44827], 90.00th=[53216], 95.00th=[58459], 00:16:35.143 | 99.00th=[60031], 99.50th=[61080], 99.90th=[64226], 99.95th=[64226], 00:16:35.143 | 99.99th=[68682] 00:16:35.143 bw ( KiB/s): min= 8520, max=10296, per=15.23%, avg=9408.00, stdev=1255.82, samples=2 00:16:35.143 iops : min= 2130, max= 2574, avg=2352.00, stdev=313.96, samples=2 00:16:35.143 lat (msec) : 10=0.02%, 20=34.77%, 50=56.11%, 100=9.10% 00:16:35.143 cpu : usr=2.18%, sys=7.72%, ctx=299, majf=0, minf=11 00:16:35.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:35.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.143 issued rwts: total=2048,2479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.143 00:16:35.143 Run status group 0 (all jobs): 00:16:35.143 READ: bw=55.2MiB/s (57.9MB/s), 8103KiB/s-21.0MiB/s (8297kB/s-22.1MB/s), io=55.9MiB (58.6MB), run=1001-1011msec 00:16:35.143 WRITE: bw=60.3MiB/s (63.3MB/s), 9808KiB/s-22.0MiB/s (10.0MB/s-23.0MB/s), io=61.0MiB (64.0MB), run=1001-1011msec 00:16:35.143 00:16:35.143 Disk stats (read/write): 00:16:35.143 nvme0n1: ios=2529/2560, merge=0/0, ticks=34192/47836, in_queue=82028, util=87.78% 00:16:35.143 nvme0n2: ios=3692/4096, merge=0/0, ticks=24202/24141, in_queue=48343, util=88.27% 00:16:35.143 nvme0n3: ios=4608/4642, merge=0/0, ticks=12530/11674, in_queue=24204, util=89.00% 00:16:35.143 nvme0n4: ios=1723/2048, merge=0/0, ticks=17599/33298, in_queue=50897, util=89.77% 00:16:35.143 10:03:33 -- target/fio.sh@55 -- # sync 00:16:35.143 10:03:33 -- target/fio.sh@59 -- # fio_pid=87426 00:16:35.143 10:03:33 -- target/fio.sh@61 -- # sleep 3 00:16:35.143 10:03:33 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:35.143 [global] 
00:16:35.143 thread=1 00:16:35.143 invalidate=1 00:16:35.143 rw=read 00:16:35.143 time_based=1 00:16:35.143 runtime=10 00:16:35.143 ioengine=libaio 00:16:35.143 direct=1 00:16:35.143 bs=4096 00:16:35.143 iodepth=1 00:16:35.143 norandommap=1 00:16:35.143 numjobs=1 00:16:35.143 00:16:35.143 [job0] 00:16:35.143 filename=/dev/nvme0n1 00:16:35.143 [job1] 00:16:35.143 filename=/dev/nvme0n2 00:16:35.143 [job2] 00:16:35.143 filename=/dev/nvme0n3 00:16:35.143 [job3] 00:16:35.143 filename=/dev/nvme0n4 00:16:35.143 Could not set queue depth (nvme0n1) 00:16:35.143 Could not set queue depth (nvme0n2) 00:16:35.143 Could not set queue depth (nvme0n3) 00:16:35.143 Could not set queue depth (nvme0n4) 00:16:35.402 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.402 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.402 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.402 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:35.402 fio-3.35 00:16:35.402 Starting 4 threads 00:16:38.687 10:03:36 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:38.687 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46243840, buflen=4096 00:16:38.687 fio: pid=87469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.687 10:03:36 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:38.687 fio: pid=87468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.687 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69640192, buflen=4096 00:16:38.687 10:03:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.687 10:03:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:38.945 fio: pid=87466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:38.945 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10502144, buflen=4096 00:16:38.945 10:03:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.945 10:03:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:39.204 fio: pid=87467, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:39.204 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61566976, buflen=4096 00:16:39.204 00:16:39.204 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87466: Mon Dec 16 10:03:37 2024 00:16:39.204 read: IOPS=5459, BW=21.3MiB/s (22.4MB/s)(74.0MiB/3471msec) 00:16:39.204 slat (usec): min=10, max=8833, avg=15.82, stdev=109.60 00:16:39.204 clat (usec): min=117, max=8059, avg=166.11, stdev=86.10 00:16:39.204 lat (usec): min=129, max=9246, avg=181.93, stdev=140.60 00:16:39.204 clat percentiles (usec): 00:16:39.204 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:16:39.204 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:16:39.204 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 239], 95.00th=[ 262], 00:16:39.204 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 416], 99.95th=[ 676], 
00:16:39.204 | 99.99th=[ 4146] 00:16:39.204 bw ( KiB/s): min=17424, max=23992, per=33.35%, avg=22251.67, stdev=2671.16, samples=6 00:16:39.204 iops : min= 4356, max= 5998, avg=5562.83, stdev=667.85, samples=6 00:16:39.204 lat (usec) : 250=92.52%, 500=7.40%, 750=0.03% 00:16:39.204 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:16:39.204 cpu : usr=1.53%, sys=6.31%, ctx=18979, majf=0, minf=1 00:16:39.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.204 issued rwts: total=18949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.204 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87467: Mon Dec 16 10:03:37 2024 00:16:39.204 read: IOPS=4026, BW=15.7MiB/s (16.5MB/s)(58.7MiB/3733msec) 00:16:39.204 slat (usec): min=10, max=9445, avg=17.15, stdev=169.33 00:16:39.204 clat (usec): min=95, max=62436, avg=229.92, stdev=517.34 00:16:39.204 lat (usec): min=125, max=62457, avg=247.07, stdev=544.04 00:16:39.204 clat percentiles (usec): 00:16:39.204 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 143], 00:16:39.204 | 30.00th=[ 210], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:16:39.204 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:16:39.204 | 99.00th=[ 310], 99.50th=[ 383], 99.90th=[ 1012], 99.95th=[ 1385], 00:16:39.204 | 99.99th=[ 7832] 00:16:39.204 bw ( KiB/s): min=12536, max=23249, per=23.50%, avg=15681.43, stdev=3443.89, samples=7 00:16:39.204 iops : min= 3134, max= 5812, avg=3920.29, stdev=860.89, samples=7 00:16:39.204 lat (usec) : 100=0.01%, 250=53.07%, 500=46.62%, 750=0.14%, 1000=0.05% 00:16:39.204 lat (msec) : 2=0.07%, 4=0.02%, 10=0.01%, 100=0.01% 00:16:39.204 cpu : usr=1.10%, sys=4.80%, ctx=15053, majf=0, minf=2 00:16:39.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.204 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.204 issued rwts: total=15032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.204 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87468: Mon Dec 16 10:03:37 2024 00:16:39.204 read: IOPS=5257, BW=20.5MiB/s (21.5MB/s)(66.4MiB/3234msec) 00:16:39.204 slat (usec): min=9, max=8789, avg=15.32, stdev=87.54 00:16:39.204 clat (usec): min=134, max=3680, avg=173.65, stdev=56.17 00:16:39.204 lat (usec): min=147, max=9037, avg=188.97, stdev=104.44 00:16:39.204 clat percentiles (usec): 00:16:39.204 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:16:39.204 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:16:39.204 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 241], 00:16:39.204 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 355], 99.95th=[ 553], 00:16:39.204 | 99.99th=[ 3228] 00:16:39.205 bw ( KiB/s): min=19568, max=22288, per=32.34%, avg=21578.67, stdev=1070.85, samples=6 00:16:39.205 iops : min= 4892, max= 5572, avg=5394.67, stdev=267.71, samples=6 00:16:39.205 lat (usec) : 250=96.29%, 500=3.65%, 750=0.02% 00:16:39.205 lat (msec) : 2=0.01%, 4=0.02% 00:16:39.205 cpu : usr=1.36%, sys=6.25%, ctx=17024, majf=0, 
minf=2 00:16:39.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.205 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.205 issued rwts: total=17003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.205 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87469: Mon Dec 16 10:03:37 2024 00:16:39.205 read: IOPS=3841, BW=15.0MiB/s (15.7MB/s)(44.1MiB/2939msec) 00:16:39.205 slat (nsec): min=8528, max=73974, avg=13921.25, stdev=4203.62 00:16:39.205 clat (usec): min=134, max=7876, avg=244.96, stdev=92.15 00:16:39.205 lat (usec): min=148, max=7893, avg=258.88, stdev=91.61 00:16:39.205 clat percentiles (usec): 00:16:39.205 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 233], 00:16:39.205 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:16:39.205 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:16:39.205 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 529], 99.95th=[ 1532], 00:16:39.205 | 99.99th=[ 2835] 00:16:39.205 bw ( KiB/s): min=14632, max=18880, per=23.33%, avg=15569.60, stdev=1852.50, samples=5 00:16:39.205 iops : min= 3658, max= 4720, avg=3892.40, stdev=463.13, samples=5 00:16:39.205 lat (usec) : 250=45.66%, 500=54.21%, 750=0.04%, 1000=0.02% 00:16:39.205 lat (msec) : 2=0.04%, 4=0.02%, 10=0.01% 00:16:39.205 cpu : usr=1.09%, sys=4.59%, ctx=11293, majf=0, minf=2 00:16:39.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.205 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.205 issued rwts: total=11291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.205 00:16:39.205 Run status group 0 (all jobs): 00:16:39.205 READ: bw=65.2MiB/s (68.3MB/s), 15.0MiB/s-21.3MiB/s (15.7MB/s-22.4MB/s), io=243MiB (255MB), run=2939-3733msec 00:16:39.205 00:16:39.205 Disk stats (read/write): 00:16:39.205 nvme0n1: ios=18304/0, merge=0/0, ticks=3176/0, in_queue=3176, util=95.51% 00:16:39.205 nvme0n2: ios=14381/0, merge=0/0, ticks=3397/0, in_queue=3397, util=95.64% 00:16:39.205 nvme0n3: ios=16606/0, merge=0/0, ticks=2981/0, in_queue=2981, util=96.43% 00:16:39.205 nvme0n4: ios=11046/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.49% 00:16:39.205 10:03:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.205 10:03:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:39.464 10:03:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.464 10:03:37 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:39.722 10:03:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.722 10:03:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:39.980 10:03:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:39.980 10:03:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:40.239 
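Context for the err=95 jobs above and the bdev_malloc_delete calls around them: fio.sh launched the four read jobs in the background for a 10-second run (fio_pid=87426), slept 3 seconds, and then began deleting the backing bdevs out from under them, first concat0/raid0 and then Malloc0 onward. Each fio job therefore aborts with "Operation not supported", which is the intended hotplug behavior; the script confirms this further down with fio_status=4 and "nvmf hotplug test: fio failed as expected". The failure offsets are consistent with the per-job counters already printed. For example, job2 on /dev/nvme0n3 is a sequential 4 KiB read, and its error offset works out to

    69640192 / 4096 = 17002

i.e. the failing read is block index 17002, in line with the "issued rwts: total=17003" reported for that job (17003 reads issued, the last one failing right after the bdev_raid_delete raid0 call logged above).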
10:03:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.239 10:03:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:40.498 10:03:39 -- target/fio.sh@69 -- # fio_status=0 00:16:40.498 10:03:39 -- target/fio.sh@70 -- # wait 87426 00:16:40.498 10:03:39 -- target/fio.sh@70 -- # fio_status=4 00:16:40.498 10:03:39 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.498 10:03:39 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.498 10:03:39 -- common/autotest_common.sh@1208 -- # local i=0 00:16:40.498 10:03:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:40.498 10:03:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.498 10:03:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:40.498 10:03:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.498 nvmf hotplug test: fio failed as expected 00:16:40.498 10:03:39 -- common/autotest_common.sh@1220 -- # return 0 00:16:40.498 10:03:39 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:40.498 10:03:39 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:40.498 10:03:39 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.757 10:03:39 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:40.757 10:03:39 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:40.757 10:03:39 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:40.757 10:03:39 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:40.757 10:03:39 -- target/fio.sh@91 -- # nvmftestfini 00:16:40.757 10:03:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.757 10:03:39 -- nvmf/common.sh@116 -- # sync 00:16:40.757 10:03:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.757 10:03:39 -- nvmf/common.sh@119 -- # set +e 00:16:40.757 10:03:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.757 10:03:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.757 rmmod nvme_tcp 00:16:40.757 rmmod nvme_fabrics 00:16:40.757 rmmod nvme_keyring 00:16:40.757 10:03:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.757 10:03:39 -- nvmf/common.sh@123 -- # set -e 00:16:40.757 10:03:39 -- nvmf/common.sh@124 -- # return 0 00:16:40.757 10:03:39 -- nvmf/common.sh@477 -- # '[' -n 86936 ']' 00:16:40.757 10:03:39 -- nvmf/common.sh@478 -- # killprocess 86936 00:16:40.757 10:03:39 -- common/autotest_common.sh@936 -- # '[' -z 86936 ']' 00:16:40.757 10:03:39 -- common/autotest_common.sh@940 -- # kill -0 86936 00:16:40.757 10:03:39 -- common/autotest_common.sh@941 -- # uname 00:16:40.757 10:03:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.757 10:03:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86936 00:16:41.015 killing process with pid 86936 00:16:41.015 10:03:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.015 10:03:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.015 10:03:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86936' 00:16:41.015 10:03:39 -- common/autotest_common.sh@955 -- # kill 86936 00:16:41.015 10:03:39 -- common/autotest_common.sh@960 -- # wait 86936 00:16:41.015 10:03:39 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:41.015 10:03:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:41.015 10:03:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:41.015 10:03:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.015 10:03:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:41.015 10:03:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.015 10:03:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.015 10:03:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.015 10:03:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:41.015 00:16:41.015 real 0m19.521s 00:16:41.015 user 1m13.724s 00:16:41.015 sys 0m9.524s 00:16:41.015 10:03:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:41.015 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:16:41.015 ************************************ 00:16:41.015 END TEST nvmf_fio_target 00:16:41.016 ************************************ 00:16:41.275 10:03:39 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:41.275 10:03:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.275 10:03:39 -- common/autotest_common.sh@10 -- # set +x 00:16:41.275 ************************************ 00:16:41.275 START TEST nvmf_bdevio 00:16:41.275 ************************************ 00:16:41.275 10:03:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:41.275 * Looking for test storage... 00:16:41.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:41.275 10:03:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:41.275 10:03:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:41.275 10:03:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:41.275 10:03:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:41.275 10:03:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:41.275 10:03:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:41.275 10:03:39 -- scripts/common.sh@335 -- # IFS=.-: 00:16:41.275 10:03:39 -- scripts/common.sh@335 -- # read -ra ver1 00:16:41.275 10:03:39 -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.275 10:03:39 -- scripts/common.sh@336 -- # read -ra ver2 00:16:41.275 10:03:39 -- scripts/common.sh@337 -- # local 'op=<' 00:16:41.275 10:03:39 -- scripts/common.sh@339 -- # ver1_l=2 00:16:41.275 10:03:39 -- scripts/common.sh@340 -- # ver2_l=1 00:16:41.275 10:03:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:41.275 10:03:39 -- scripts/common.sh@343 -- # case "$op" in 00:16:41.275 10:03:39 -- scripts/common.sh@344 -- # : 1 00:16:41.275 10:03:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:41.275 10:03:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.275 10:03:39 -- scripts/common.sh@364 -- # decimal 1 00:16:41.275 10:03:39 -- scripts/common.sh@352 -- # local d=1 00:16:41.275 10:03:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.275 10:03:39 -- scripts/common.sh@354 -- # echo 1 00:16:41.275 10:03:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:41.275 10:03:39 -- scripts/common.sh@365 -- # decimal 2 00:16:41.275 10:03:39 -- scripts/common.sh@352 -- # local d=2 00:16:41.275 10:03:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.275 10:03:39 -- scripts/common.sh@354 -- # echo 2 00:16:41.275 10:03:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:41.275 10:03:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:41.275 10:03:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:41.275 10:03:39 -- scripts/common.sh@367 -- # return 0 00:16:41.275 10:03:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:41.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.275 --rc genhtml_branch_coverage=1 00:16:41.275 --rc genhtml_function_coverage=1 00:16:41.275 --rc genhtml_legend=1 00:16:41.275 --rc geninfo_all_blocks=1 00:16:41.275 --rc geninfo_unexecuted_blocks=1 00:16:41.275 00:16:41.275 ' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:41.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.275 --rc genhtml_branch_coverage=1 00:16:41.275 --rc genhtml_function_coverage=1 00:16:41.275 --rc genhtml_legend=1 00:16:41.275 --rc geninfo_all_blocks=1 00:16:41.275 --rc geninfo_unexecuted_blocks=1 00:16:41.275 00:16:41.275 ' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:41.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.275 --rc genhtml_branch_coverage=1 00:16:41.275 --rc genhtml_function_coverage=1 00:16:41.275 --rc genhtml_legend=1 00:16:41.275 --rc geninfo_all_blocks=1 00:16:41.275 --rc geninfo_unexecuted_blocks=1 00:16:41.275 00:16:41.275 ' 00:16:41.275 10:03:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:41.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.275 --rc genhtml_branch_coverage=1 00:16:41.275 --rc genhtml_function_coverage=1 00:16:41.275 --rc genhtml_legend=1 00:16:41.275 --rc geninfo_all_blocks=1 00:16:41.275 --rc geninfo_unexecuted_blocks=1 00:16:41.275 00:16:41.275 ' 00:16:41.275 10:03:39 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:41.275 10:03:39 -- nvmf/common.sh@7 -- # uname -s 00:16:41.275 10:03:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.275 10:03:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.275 10:03:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.275 10:03:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.275 10:03:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.275 10:03:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.275 10:03:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.275 10:03:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.275 10:03:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.275 10:03:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:41.275 
10:03:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:41.275 10:03:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.275 10:03:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.275 10:03:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:41.275 10:03:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:41.275 10:03:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.275 10:03:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.275 10:03:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.275 10:03:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.275 10:03:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.275 10:03:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.275 10:03:39 -- paths/export.sh@5 -- # export PATH 00:16:41.275 10:03:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.275 10:03:39 -- nvmf/common.sh@46 -- # : 0 00:16:41.275 10:03:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:41.275 10:03:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:41.275 10:03:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:41.275 10:03:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.275 10:03:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.275 10:03:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
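For orientation, the NVME_CONNECT and NVME_HOST variables set above are the pieces the nvmf test scripts later assemble into the initiator-side connect call. A minimal sketch of what that expands to, assuming the port 4420 set above, the 10.0.0.2 first-target address set up just below, and the nqn.2016-06.io.spdk:cnode1 subsystem used by the fio test earlier in this log:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The matching teardown is the "nvme disconnect -n nqn.2016-06.io.spdk:cnode1" step visible at the end of the fio target test above.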
00:16:41.275 10:03:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:41.275 10:03:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:41.275 10:03:39 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.275 10:03:39 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.275 10:03:39 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:41.275 10:03:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:41.275 10:03:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.275 10:03:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:41.275 10:03:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:41.275 10:03:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:41.275 10:03:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.275 10:03:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.275 10:03:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.275 10:03:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:41.275 10:03:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:41.276 10:03:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:41.276 10:03:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:41.276 10:03:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:41.276 10:03:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:41.276 10:03:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:41.276 10:03:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:41.276 10:03:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:41.276 10:03:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:41.276 10:03:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:41.276 10:03:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:41.276 10:03:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:41.276 10:03:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:41.276 10:03:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:41.276 10:03:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:41.276 Cannot find device "nvmf_tgt_br" 00:16:41.276 10:03:39 -- nvmf/common.sh@154 -- # true 00:16:41.276 10:03:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:41.276 Cannot find device "nvmf_tgt_br2" 00:16:41.276 10:03:39 -- nvmf/common.sh@155 -- # true 00:16:41.276 10:03:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:41.276 10:03:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:41.534 Cannot find device "nvmf_tgt_br" 00:16:41.534 10:03:39 -- nvmf/common.sh@157 -- # true 00:16:41.534 10:03:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:41.534 Cannot find device "nvmf_tgt_br2" 00:16:41.534 10:03:39 -- nvmf/common.sh@158 -- # true 00:16:41.534 10:03:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:41.534 10:03:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:41.534 10:03:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:41.534 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:41.534 10:03:39 -- nvmf/common.sh@161 -- # true 00:16:41.534 10:03:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:41.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:41.534 10:03:39 -- nvmf/common.sh@162 -- # true 00:16:41.534 10:03:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:41.534 10:03:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:41.534 10:03:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:41.534 10:03:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:41.534 10:03:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:41.534 10:03:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:41.534 10:03:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:41.534 10:03:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:41.534 10:03:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:41.534 10:03:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:41.534 10:03:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:41.534 10:03:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:41.534 10:03:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:41.534 10:03:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:41.534 10:03:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:41.534 10:03:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:41.534 10:03:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:41.534 10:03:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:41.793 10:03:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:41.793 10:03:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.793 10:03:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.793 10:03:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:41.793 10:03:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.793 10:03:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:41.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:16:41.793 00:16:41.793 --- 10.0.0.2 ping statistics --- 00:16:41.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.794 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:16:41.794 10:03:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:41.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:16:41.794 00:16:41.794 --- 10.0.0.3 ping statistics --- 00:16:41.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.794 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:41.794 10:03:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:41.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:41.794 00:16:41.794 --- 10.0.0.1 ping statistics --- 00:16:41.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.794 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:41.794 10:03:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.794 10:03:40 -- nvmf/common.sh@421 -- # return 0 00:16:41.794 10:03:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:41.794 10:03:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.794 10:03:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:41.794 10:03:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:41.794 10:03:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.794 10:03:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:41.794 10:03:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:41.794 10:03:40 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:41.794 10:03:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:41.794 10:03:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.794 10:03:40 -- common/autotest_common.sh@10 -- # set +x 00:16:41.794 10:03:40 -- nvmf/common.sh@469 -- # nvmfpid=87805 00:16:41.794 10:03:40 -- nvmf/common.sh@470 -- # waitforlisten 87805 00:16:41.794 10:03:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:41.794 10:03:40 -- common/autotest_common.sh@829 -- # '[' -z 87805 ']' 00:16:41.794 10:03:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.794 10:03:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.794 10:03:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.794 10:03:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.794 10:03:40 -- common/autotest_common.sh@10 -- # set +x 00:16:41.794 [2024-12-16 10:03:40.293276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:41.794 [2024-12-16 10:03:40.293373] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.053 [2024-12-16 10:03:40.435654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.053 [2024-12-16 10:03:40.503097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:42.053 [2024-12-16 10:03:40.503271] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.053 [2024-12-16 10:03:40.503287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.053 [2024-12-16 10:03:40.503298] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
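The nvmf_veth_init sequence traced above (the ip netns / ip link / iptables / ping commands) condenses to the following sketch, useful for reproducing the test topology by hand. Interface names and the 10.0.0.0/24 addressing are taken verbatim from the trace; the commands assume root privileges and a host where none of these devices already exist:

    # namespace that hosts the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator side plus two target-side interfaces
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring the links up on both sides of the namespace boundary
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP traffic (port 4420) and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks, mirroring the pings in the trace
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1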
00:16:42.053 [2024-12-16 10:03:40.503454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:42.053 [2024-12-16 10:03:40.505631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:42.053 [2024-12-16 10:03:40.505760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:42.053 [2024-12-16 10:03:40.505875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.620 10:03:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.620 10:03:41 -- common/autotest_common.sh@862 -- # return 0 00:16:42.620 10:03:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:42.620 10:03:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.620 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 10:03:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.879 10:03:41 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.879 10:03:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.879 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 [2024-12-16 10:03:41.269457] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.879 10:03:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.879 10:03:41 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.879 10:03:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.879 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 Malloc0 00:16:42.879 10:03:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.879 10:03:41 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:42.879 10:03:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.879 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 10:03:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.879 10:03:41 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.879 10:03:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.879 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 10:03:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.879 10:03:41 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.879 10:03:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.879 10:03:41 -- common/autotest_common.sh@10 -- # set +x 00:16:42.879 [2024-12-16 10:03:41.343842] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.879 10:03:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.879 10:03:41 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:42.879 10:03:41 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:42.879 10:03:41 -- nvmf/common.sh@520 -- # config=() 00:16:42.879 10:03:41 -- nvmf/common.sh@520 -- # local subsystem config 00:16:42.879 10:03:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:42.879 10:03:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:42.879 { 00:16:42.879 "params": { 00:16:42.879 "name": "Nvme$subsystem", 00:16:42.879 "trtype": "$TEST_TRANSPORT", 00:16:42.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.879 "adrfam": "ipv4", 00:16:42.879 "trsvcid": "$NVMF_PORT", 00:16:42.879 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.879 "hdgst": ${hdgst:-false}, 00:16:42.879 "ddgst": ${ddgst:-false} 00:16:42.879 }, 00:16:42.879 "method": "bdev_nvme_attach_controller" 00:16:42.879 } 00:16:42.879 EOF 00:16:42.879 )") 00:16:42.879 10:03:41 -- nvmf/common.sh@542 -- # cat 00:16:42.879 10:03:41 -- nvmf/common.sh@544 -- # jq . 00:16:42.879 10:03:41 -- nvmf/common.sh@545 -- # IFS=, 00:16:42.879 10:03:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:42.879 "params": { 00:16:42.879 "name": "Nvme1", 00:16:42.879 "trtype": "tcp", 00:16:42.879 "traddr": "10.0.0.2", 00:16:42.879 "adrfam": "ipv4", 00:16:42.879 "trsvcid": "4420", 00:16:42.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.879 "hdgst": false, 00:16:42.879 "ddgst": false 00:16:42.879 }, 00:16:42.879 "method": "bdev_nvme_attach_controller" 00:16:42.879 }' 00:16:42.879 [2024-12-16 10:03:41.406254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:42.879 [2024-12-16 10:03:41.406348] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87859 ] 00:16:43.138 [2024-12-16 10:03:41.553023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.138 [2024-12-16 10:03:41.615702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.138 [2024-12-16 10:03:41.615843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.138 [2024-12-16 10:03:41.615850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.396 [2024-12-16 10:03:41.791124] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:43.396 [2024-12-16 10:03:41.791173] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:43.396 I/O targets: 00:16:43.396 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:43.396 00:16:43.396 00:16:43.396 CUnit - A unit testing framework for C - Version 2.1-3 00:16:43.396 http://cunit.sourceforge.net/ 00:16:43.396 00:16:43.396 00:16:43.396 Suite: bdevio tests on: Nvme1n1 00:16:43.396 Test: blockdev write read block ...passed 00:16:43.396 Test: blockdev write zeroes read block ...passed 00:16:43.396 Test: blockdev write zeroes read no split ...passed 00:16:43.396 Test: blockdev write zeroes read split ...passed 00:16:43.396 Test: blockdev write zeroes read split partial ...passed 00:16:43.396 Test: blockdev reset ...[2024-12-16 10:03:41.907003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:43.396 [2024-12-16 10:03:41.907095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544ed0 (9): Bad file descriptor 00:16:43.396 [2024-12-16 10:03:41.920373] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:43.396 passed 00:16:43.396 Test: blockdev write read 8 blocks ...passed 00:16:43.397 Test: blockdev write read size > 128k ...passed 00:16:43.397 Test: blockdev write read invalid size ...passed 00:16:43.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:43.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:43.397 Test: blockdev write read max offset ...passed 00:16:43.656 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:43.656 Test: blockdev writev readv 8 blocks ...passed 00:16:43.656 Test: blockdev writev readv 30 x 1block ...passed 00:16:43.656 Test: blockdev writev readv block ...passed 00:16:43.656 Test: blockdev writev readv size > 128k ...passed 00:16:43.656 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:43.656 Test: blockdev comparev and writev ...[2024-12-16 10:03:42.091955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.091994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.092014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.092025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.092677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.092734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.092751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.092761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.093205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.093232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.093248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.093259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.093909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.093925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:43.656 [2024-12-16 10:03:42.093935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:43.656 passed 00:16:43.656 Test: blockdev nvme passthru rw ...passed 00:16:43.656 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:03:42.176647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.656 [2024-12-16 10:03:42.176675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.176825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.656 [2024-12-16 10:03:42.176841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.176953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.656 [2024-12-16 10:03:42.176978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:43.656 [2024-12-16 10:03:42.177089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:43.656 [2024-12-16 10:03:42.177113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:43.656 passed 00:16:43.656 Test: blockdev nvme admin passthru ...passed 00:16:43.656 Test: blockdev copy ...passed 00:16:43.656 00:16:43.656 Run Summary: Type Total Ran Passed Failed Inactive 00:16:43.656 suites 1 1 n/a 0 0 00:16:43.656 tests 23 23 23 0 0 00:16:43.656 asserts 152 152 152 0 n/a 00:16:43.656 00:16:43.656 Elapsed time = 0.896 seconds 00:16:43.915 10:03:42 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.915 10:03:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.915 10:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:43.915 10:03:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.915 10:03:42 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:43.915 10:03:42 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:43.915 10:03:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:43.915 10:03:42 -- nvmf/common.sh@116 -- # sync 00:16:43.915 10:03:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:43.915 10:03:42 -- nvmf/common.sh@119 -- # set +e 00:16:43.915 10:03:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:43.915 10:03:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:43.915 rmmod nvme_tcp 00:16:43.915 rmmod nvme_fabrics 00:16:43.915 rmmod nvme_keyring 00:16:44.174 10:03:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.174 10:03:42 -- nvmf/common.sh@123 -- # set -e 00:16:44.174 10:03:42 -- nvmf/common.sh@124 -- # return 0 00:16:44.174 10:03:42 -- nvmf/common.sh@477 -- # '[' -n 87805 ']' 00:16:44.174 10:03:42 -- nvmf/common.sh@478 -- # killprocess 87805 00:16:44.174 10:03:42 -- common/autotest_common.sh@936 -- # '[' -z 87805 ']' 00:16:44.174 10:03:42 -- common/autotest_common.sh@940 -- # kill -0 87805 00:16:44.174 10:03:42 -- common/autotest_common.sh@941 -- # uname 00:16:44.174 10:03:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.174 10:03:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87805 00:16:44.174 killing process with pid 87805 00:16:44.174 
10:03:42 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:44.174 10:03:42 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:44.174 10:03:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87805' 00:16:44.174 10:03:42 -- common/autotest_common.sh@955 -- # kill 87805 00:16:44.174 10:03:42 -- common/autotest_common.sh@960 -- # wait 87805 00:16:44.174 10:03:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.174 10:03:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.174 10:03:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.174 10:03:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.174 10:03:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.174 10:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.174 10:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.174 10:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.433 10:03:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:44.433 00:16:44.433 real 0m3.159s 00:16:44.433 user 0m11.099s 00:16:44.433 sys 0m0.803s 00:16:44.433 10:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.433 10:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:44.433 ************************************ 00:16:44.433 END TEST nvmf_bdevio 00:16:44.433 ************************************ 00:16:44.433 10:03:42 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:44.433 10:03:42 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.433 10:03:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:44.433 10:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.433 10:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:44.433 ************************************ 00:16:44.433 START TEST nvmf_bdevio_no_huge 00:16:44.433 ************************************ 00:16:44.433 10:03:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:44.433 * Looking for test storage... 
00:16:44.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.433 10:03:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:44.433 10:03:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:44.433 10:03:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:44.433 10:03:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:44.433 10:03:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:44.433 10:03:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:44.433 10:03:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:44.433 10:03:43 -- scripts/common.sh@335 -- # IFS=.-: 00:16:44.433 10:03:43 -- scripts/common.sh@335 -- # read -ra ver1 00:16:44.433 10:03:43 -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.433 10:03:43 -- scripts/common.sh@336 -- # read -ra ver2 00:16:44.433 10:03:43 -- scripts/common.sh@337 -- # local 'op=<' 00:16:44.433 10:03:43 -- scripts/common.sh@339 -- # ver1_l=2 00:16:44.433 10:03:43 -- scripts/common.sh@340 -- # ver2_l=1 00:16:44.433 10:03:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:44.433 10:03:43 -- scripts/common.sh@343 -- # case "$op" in 00:16:44.433 10:03:43 -- scripts/common.sh@344 -- # : 1 00:16:44.433 10:03:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:44.433 10:03:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.433 10:03:43 -- scripts/common.sh@364 -- # decimal 1 00:16:44.433 10:03:43 -- scripts/common.sh@352 -- # local d=1 00:16:44.433 10:03:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.433 10:03:43 -- scripts/common.sh@354 -- # echo 1 00:16:44.433 10:03:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:44.433 10:03:43 -- scripts/common.sh@365 -- # decimal 2 00:16:44.433 10:03:43 -- scripts/common.sh@352 -- # local d=2 00:16:44.433 10:03:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.433 10:03:43 -- scripts/common.sh@354 -- # echo 2 00:16:44.433 10:03:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:44.433 10:03:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:44.433 10:03:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:44.433 10:03:43 -- scripts/common.sh@367 -- # return 0 00:16:44.433 10:03:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.433 10:03:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.433 --rc genhtml_branch_coverage=1 00:16:44.433 --rc genhtml_function_coverage=1 00:16:44.433 --rc genhtml_legend=1 00:16:44.433 --rc geninfo_all_blocks=1 00:16:44.433 --rc geninfo_unexecuted_blocks=1 00:16:44.433 00:16:44.433 ' 00:16:44.433 10:03:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.433 --rc genhtml_branch_coverage=1 00:16:44.433 --rc genhtml_function_coverage=1 00:16:44.433 --rc genhtml_legend=1 00:16:44.433 --rc geninfo_all_blocks=1 00:16:44.433 --rc geninfo_unexecuted_blocks=1 00:16:44.433 00:16:44.433 ' 00:16:44.433 10:03:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.433 --rc genhtml_branch_coverage=1 00:16:44.433 --rc genhtml_function_coverage=1 00:16:44.433 --rc genhtml_legend=1 00:16:44.433 --rc geninfo_all_blocks=1 00:16:44.433 --rc geninfo_unexecuted_blocks=1 00:16:44.433 00:16:44.433 ' 00:16:44.433 
10:03:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.433 --rc genhtml_branch_coverage=1 00:16:44.433 --rc genhtml_function_coverage=1 00:16:44.433 --rc genhtml_legend=1 00:16:44.433 --rc geninfo_all_blocks=1 00:16:44.433 --rc geninfo_unexecuted_blocks=1 00:16:44.433 00:16:44.433 ' 00:16:44.433 10:03:43 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.433 10:03:43 -- nvmf/common.sh@7 -- # uname -s 00:16:44.433 10:03:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.433 10:03:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.433 10:03:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.433 10:03:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.433 10:03:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.433 10:03:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.433 10:03:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.433 10:03:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.433 10:03:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.433 10:03:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.433 10:03:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:44.433 10:03:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:44.433 10:03:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.433 10:03:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.434 10:03:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.434 10:03:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.434 10:03:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.434 10:03:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.434 10:03:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.434 10:03:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.434 10:03:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.434 10:03:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.434 10:03:43 -- paths/export.sh@5 -- # export PATH 00:16:44.434 10:03:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.434 10:03:43 -- nvmf/common.sh@46 -- # : 0 00:16:44.434 10:03:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.434 10:03:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.434 10:03:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.434 10:03:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.434 10:03:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.434 10:03:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.434 10:03:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.434 10:03:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.434 10:03:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.434 10:03:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.434 10:03:43 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:44.434 10:03:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:44.434 10:03:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.434 10:03:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.434 10:03:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.434 10:03:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.434 10:03:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.434 10:03:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.434 10:03:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.693 10:03:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:44.693 10:03:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:44.693 10:03:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:44.693 10:03:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:44.693 10:03:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:44.693 10:03:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:44.693 10:03:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.693 10:03:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.693 10:03:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.693 10:03:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:44.693 10:03:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.693 10:03:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.693 10:03:43 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.693 10:03:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.693 10:03:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.693 10:03:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.693 10:03:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.693 10:03:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.693 10:03:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:44.693 10:03:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:44.693 Cannot find device "nvmf_tgt_br" 00:16:44.693 10:03:43 -- nvmf/common.sh@154 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.693 Cannot find device "nvmf_tgt_br2" 00:16:44.693 10:03:43 -- nvmf/common.sh@155 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:44.693 10:03:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:44.693 Cannot find device "nvmf_tgt_br" 00:16:44.693 10:03:43 -- nvmf/common.sh@157 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:44.693 Cannot find device "nvmf_tgt_br2" 00:16:44.693 10:03:43 -- nvmf/common.sh@158 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:44.693 10:03:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:44.693 10:03:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.693 10:03:43 -- nvmf/common.sh@161 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.693 10:03:43 -- nvmf/common.sh@162 -- # true 00:16:44.693 10:03:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.693 10:03:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.693 10:03:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.693 10:03:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.693 10:03:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.693 10:03:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.693 10:03:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.693 10:03:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.693 10:03:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.693 10:03:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:44.693 10:03:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:44.693 10:03:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:44.693 10:03:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:44.693 10:03:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.693 10:03:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.693 10:03:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:44.951 10:03:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:44.951 10:03:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:44.951 10:03:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.951 10:03:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.951 10:03:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.951 10:03:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.951 10:03:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.951 10:03:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:44.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:44.951 00:16:44.951 --- 10.0.0.2 ping statistics --- 00:16:44.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.951 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:44.951 10:03:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:44.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:44.951 00:16:44.951 --- 10.0.0.3 ping statistics --- 00:16:44.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.951 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:44.951 10:03:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:44.951 00:16:44.951 --- 10.0.0.1 ping statistics --- 00:16:44.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.951 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:44.951 10:03:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.951 10:03:43 -- nvmf/common.sh@421 -- # return 0 00:16:44.951 10:03:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.951 10:03:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.951 10:03:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.951 10:03:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.951 10:03:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.951 10:03:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.951 10:03:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:44.951 10:03:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:44.951 10:03:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.951 10:03:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.951 10:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:44.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:44.951 10:03:43 -- nvmf/common.sh@469 -- # nvmfpid=88045 00:16:44.951 10:03:43 -- nvmf/common.sh@470 -- # waitforlisten 88045 00:16:44.951 10:03:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:44.952 10:03:43 -- common/autotest_common.sh@829 -- # '[' -z 88045 ']' 00:16:44.952 10:03:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.952 10:03:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.952 10:03:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.952 10:03:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.952 10:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:44.952 [2024-12-16 10:03:43.450938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:44.952 [2024-12-16 10:03:43.451037] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:45.210 [2024-12-16 10:03:43.597110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.210 [2024-12-16 10:03:43.711239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.210 [2024-12-16 10:03:43.711422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.210 [2024-12-16 10:03:43.711440] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.210 [2024-12-16 10:03:43.711452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
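Functionally, the difference from the earlier hugepage-backed run is confined to how the target application is launched: this test adds --no-huge -s 1024, so the EAL comes up with --no-huge --iova-mode=va and a plain (non-hugepage) memory pool of 1024 MB, as the EAL parameter line above shows. Side by side, the two launch commands taken from the traces are:

    # nvmf_bdevio (hugepage-backed):
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    # nvmf_bdevio_no_huge:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78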
00:16:45.210 [2024-12-16 10:03:43.712014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.210 [2024-12-16 10:03:43.712228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.210 [2024-12-16 10:03:43.712341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.210 [2024-12-16 10:03:43.712350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.147 10:03:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.147 10:03:44 -- common/autotest_common.sh@862 -- # return 0 00:16:46.147 10:03:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:46.147 10:03:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.147 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.147 10:03:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.147 10:03:44 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.147 10:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.147 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.147 [2024-12-16 10:03:44.499025] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.147 10:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.147 10:03:44 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.147 10:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.147 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.147 Malloc0 00:16:46.147 10:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.147 10:03:44 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.147 10:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.147 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.147 10:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.147 10:03:44 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.147 10:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.148 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.148 10:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.148 10:03:44 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.148 10:03:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.148 10:03:44 -- common/autotest_common.sh@10 -- # set +x 00:16:46.148 [2024-12-16 10:03:44.541630] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.148 10:03:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.148 10:03:44 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:46.148 10:03:44 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.148 10:03:44 -- nvmf/common.sh@520 -- # config=() 00:16:46.148 10:03:44 -- nvmf/common.sh@520 -- # local subsystem config 00:16:46.148 10:03:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:46.148 10:03:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:46.148 { 00:16:46.148 "params": { 00:16:46.148 "name": "Nvme$subsystem", 00:16:46.148 "trtype": "$TEST_TRANSPORT", 00:16:46.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.148 "adrfam": "ipv4", 00:16:46.148 "trsvcid": "$NVMF_PORT", 
00:16:46.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.148 "hdgst": ${hdgst:-false}, 00:16:46.148 "ddgst": ${ddgst:-false} 00:16:46.148 }, 00:16:46.148 "method": "bdev_nvme_attach_controller" 00:16:46.148 } 00:16:46.148 EOF 00:16:46.148 )") 00:16:46.148 10:03:44 -- nvmf/common.sh@542 -- # cat 00:16:46.148 10:03:44 -- nvmf/common.sh@544 -- # jq . 00:16:46.148 10:03:44 -- nvmf/common.sh@545 -- # IFS=, 00:16:46.148 10:03:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:46.148 "params": { 00:16:46.148 "name": "Nvme1", 00:16:46.148 "trtype": "tcp", 00:16:46.148 "traddr": "10.0.0.2", 00:16:46.148 "adrfam": "ipv4", 00:16:46.148 "trsvcid": "4420", 00:16:46.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.148 "hdgst": false, 00:16:46.148 "ddgst": false 00:16:46.148 }, 00:16:46.148 "method": "bdev_nvme_attach_controller" 00:16:46.148 }' 00:16:46.148 [2024-12-16 10:03:44.599274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.148 [2024-12-16 10:03:44.599393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88102 ] 00:16:46.148 [2024-12-16 10:03:44.741048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.406 [2024-12-16 10:03:44.879751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.406 [2024-12-16 10:03:44.879889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.406 [2024-12-16 10:03:44.879896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.665 [2024-12-16 10:03:45.062987] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:46.665 [2024-12-16 10:03:45.063047] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:46.665 I/O targets: 00:16:46.665 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.665 00:16:46.665 00:16:46.665 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.665 http://cunit.sourceforge.net/ 00:16:46.665 00:16:46.665 00:16:46.665 Suite: bdevio tests on: Nvme1n1 00:16:46.665 Test: blockdev write read block ...passed 00:16:46.665 Test: blockdev write zeroes read block ...passed 00:16:46.665 Test: blockdev write zeroes read no split ...passed 00:16:46.665 Test: blockdev write zeroes read split ...passed 00:16:46.665 Test: blockdev write zeroes read split partial ...passed 00:16:46.665 Test: blockdev reset ...[2024-12-16 10:03:45.190522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.665 [2024-12-16 10:03:45.190606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf5820 (9): Bad file descriptor 00:16:46.665 [2024-12-16 10:03:45.211030] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:46.665 passed 00:16:46.665 Test: blockdev write read 8 blocks ...passed 00:16:46.665 Test: blockdev write read size > 128k ...passed 00:16:46.665 Test: blockdev write read invalid size ...passed 00:16:46.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.665 Test: blockdev write read max offset ...passed 00:16:46.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.925 Test: blockdev writev readv 8 blocks ...passed 00:16:46.925 Test: blockdev writev readv 30 x 1block ...passed 00:16:46.925 Test: blockdev writev readv block ...passed 00:16:46.925 Test: blockdev writev readv size > 128k ...passed 00:16:46.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:46.925 Test: blockdev comparev and writev ...[2024-12-16 10:03:45.383783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.383889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.383902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.384365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.384402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.384419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.384429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.384767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.384795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.384812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.384823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.385311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.385339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.385367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.925 [2024-12-16 10:03:45.385380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:46.925 passed 00:16:46.925 Test: blockdev nvme passthru rw ...passed 00:16:46.925 Test: blockdev nvme passthru vendor specific ...[2024-12-16 10:03:45.467664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.925 [2024-12-16 10:03:45.467692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.467824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.925 [2024-12-16 10:03:45.467840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.467949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.925 [2024-12-16 10:03:45.467964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:46.925 [2024-12-16 10:03:45.468076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.925 [2024-12-16 10:03:45.468090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:46.925 passed 00:16:46.925 Test: blockdev nvme admin passthru ...passed 00:16:46.925 Test: blockdev copy ...passed 00:16:46.925 00:16:46.925 Run Summary: Type Total Ran Passed Failed Inactive 00:16:46.925 suites 1 1 n/a 0 0 00:16:46.925 tests 23 23 23 0 0 00:16:46.925 asserts 152 152 152 0 n/a 00:16:46.925 00:16:46.925 Elapsed time = 0.922 seconds 00:16:47.508 10:03:45 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.508 10:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.508 10:03:45 -- common/autotest_common.sh@10 -- # set +x 00:16:47.508 10:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.508 10:03:45 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.508 10:03:45 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:47.508 10:03:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:47.508 10:03:45 -- nvmf/common.sh@116 -- # sync 00:16:47.508 10:03:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:47.508 10:03:45 -- nvmf/common.sh@119 -- # set +e 00:16:47.508 10:03:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:47.508 10:03:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:47.508 rmmod nvme_tcp 00:16:47.508 rmmod nvme_fabrics 00:16:47.508 rmmod nvme_keyring 00:16:47.508 10:03:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:47.508 10:03:46 -- nvmf/common.sh@123 -- # set -e 00:16:47.508 10:03:46 -- nvmf/common.sh@124 -- # return 0 00:16:47.508 10:03:46 -- nvmf/common.sh@477 -- # '[' -n 88045 ']' 00:16:47.508 10:03:46 -- nvmf/common.sh@478 -- # killprocess 88045 00:16:47.508 10:03:46 -- common/autotest_common.sh@936 -- # '[' -z 88045 ']' 00:16:47.508 10:03:46 -- common/autotest_common.sh@940 -- # kill -0 88045 00:16:47.508 10:03:46 -- common/autotest_common.sh@941 -- # uname 00:16:47.508 10:03:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.508 10:03:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88045 00:16:47.508 10:03:46 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:16:47.508 10:03:46 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:47.508 killing process with pid 88045 00:16:47.508 10:03:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88045' 00:16:47.508 10:03:46 -- common/autotest_common.sh@955 -- # kill 88045 00:16:47.508 10:03:46 -- common/autotest_common.sh@960 -- # wait 88045 00:16:48.116 10:03:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:48.116 10:03:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:48.116 10:03:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:48.116 10:03:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.116 10:03:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:48.116 10:03:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.116 10:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.116 10:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.116 10:03:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:48.116 ************************************ 00:16:48.116 END TEST nvmf_bdevio_no_huge 00:16:48.116 ************************************ 00:16:48.116 00:16:48.116 real 0m3.581s 00:16:48.116 user 0m12.738s 00:16:48.116 sys 0m1.327s 00:16:48.116 10:03:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.116 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:48.116 10:03:46 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:48.116 10:03:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.116 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:16:48.116 ************************************ 00:16:48.116 START TEST nvmf_tls 00:16:48.116 ************************************ 00:16:48.116 10:03:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:48.116 * Looking for test storage... 00:16:48.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:48.116 10:03:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:48.116 10:03:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:48.116 10:03:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:48.116 10:03:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:48.116 10:03:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:48.116 10:03:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:48.116 10:03:46 -- scripts/common.sh@335 -- # IFS=.-: 00:16:48.116 10:03:46 -- scripts/common.sh@335 -- # read -ra ver1 00:16:48.116 10:03:46 -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.116 10:03:46 -- scripts/common.sh@336 -- # read -ra ver2 00:16:48.116 10:03:46 -- scripts/common.sh@337 -- # local 'op=<' 00:16:48.116 10:03:46 -- scripts/common.sh@339 -- # ver1_l=2 00:16:48.116 10:03:46 -- scripts/common.sh@340 -- # ver2_l=1 00:16:48.116 10:03:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:48.116 10:03:46 -- scripts/common.sh@343 -- # case "$op" in 00:16:48.116 10:03:46 -- scripts/common.sh@344 -- # : 1 00:16:48.116 10:03:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:48.116 10:03:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.116 10:03:46 -- scripts/common.sh@364 -- # decimal 1 00:16:48.116 10:03:46 -- scripts/common.sh@352 -- # local d=1 00:16:48.116 10:03:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.116 10:03:46 -- scripts/common.sh@354 -- # echo 1 00:16:48.116 10:03:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:48.116 10:03:46 -- scripts/common.sh@365 -- # decimal 2 00:16:48.116 10:03:46 -- scripts/common.sh@352 -- # local d=2 00:16:48.116 10:03:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.116 10:03:46 -- scripts/common.sh@354 -- # echo 2 00:16:48.116 10:03:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:48.116 10:03:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:48.116 10:03:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:48.116 10:03:46 -- scripts/common.sh@367 -- # return 0 00:16:48.116 10:03:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:48.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.116 --rc genhtml_branch_coverage=1 00:16:48.116 --rc genhtml_function_coverage=1 00:16:48.116 --rc genhtml_legend=1 00:16:48.116 --rc geninfo_all_blocks=1 00:16:48.116 --rc geninfo_unexecuted_blocks=1 00:16:48.116 00:16:48.116 ' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:48.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.116 --rc genhtml_branch_coverage=1 00:16:48.116 --rc genhtml_function_coverage=1 00:16:48.116 --rc genhtml_legend=1 00:16:48.116 --rc geninfo_all_blocks=1 00:16:48.116 --rc geninfo_unexecuted_blocks=1 00:16:48.116 00:16:48.116 ' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:48.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.116 --rc genhtml_branch_coverage=1 00:16:48.116 --rc genhtml_function_coverage=1 00:16:48.116 --rc genhtml_legend=1 00:16:48.116 --rc geninfo_all_blocks=1 00:16:48.116 --rc geninfo_unexecuted_blocks=1 00:16:48.116 00:16:48.116 ' 00:16:48.116 10:03:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:48.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.116 --rc genhtml_branch_coverage=1 00:16:48.116 --rc genhtml_function_coverage=1 00:16:48.116 --rc genhtml_legend=1 00:16:48.116 --rc geninfo_all_blocks=1 00:16:48.116 --rc geninfo_unexecuted_blocks=1 00:16:48.116 00:16:48.116 ' 00:16:48.116 10:03:46 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.116 10:03:46 -- nvmf/common.sh@7 -- # uname -s 00:16:48.116 10:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.116 10:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.116 10:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.116 10:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.116 10:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.116 10:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.116 10:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.116 10:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.116 10:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.116 10:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.116 10:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:48.116 
10:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:16:48.116 10:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.117 10:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.117 10:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.117 10:03:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.117 10:03:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.117 10:03:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.117 10:03:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.117 10:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.117 10:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.117 10:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.117 10:03:46 -- paths/export.sh@5 -- # export PATH 00:16:48.117 10:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.117 10:03:46 -- nvmf/common.sh@46 -- # : 0 00:16:48.117 10:03:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:48.117 10:03:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:48.117 10:03:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:48.117 10:03:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.117 10:03:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.117 10:03:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:48.117 10:03:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:48.117 10:03:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:48.117 10:03:46 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.117 10:03:46 -- target/tls.sh@71 -- # nvmftestinit 00:16:48.117 10:03:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:48.117 10:03:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.117 10:03:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:48.117 10:03:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:48.117 10:03:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:48.117 10:03:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.117 10:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.117 10:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.117 10:03:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:48.117 10:03:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:48.117 10:03:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:48.117 10:03:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:48.117 10:03:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:48.117 10:03:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:48.117 10:03:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.117 10:03:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.117 10:03:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:48.117 10:03:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:48.117 10:03:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.117 10:03:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.117 10:03:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.117 10:03:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.117 10:03:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.117 10:03:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.117 10:03:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.117 10:03:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.117 10:03:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:48.376 10:03:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:48.376 Cannot find device "nvmf_tgt_br" 00:16:48.376 10:03:46 -- nvmf/common.sh@154 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.376 Cannot find device "nvmf_tgt_br2" 00:16:48.376 10:03:46 -- nvmf/common.sh@155 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:48.376 10:03:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:48.376 Cannot find device "nvmf_tgt_br" 00:16:48.376 10:03:46 -- nvmf/common.sh@157 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:48.376 Cannot find device "nvmf_tgt_br2" 00:16:48.376 10:03:46 -- nvmf/common.sh@158 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:48.376 10:03:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:48.376 10:03:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:16:48.376 10:03:46 -- nvmf/common.sh@161 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.376 10:03:46 -- nvmf/common.sh@162 -- # true 00:16:48.376 10:03:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.376 10:03:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.376 10:03:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.376 10:03:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.376 10:03:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.376 10:03:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.376 10:03:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.376 10:03:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.376 10:03:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.376 10:03:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:48.376 10:03:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:48.376 10:03:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:48.376 10:03:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:48.376 10:03:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.376 10:03:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.376 10:03:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.376 10:03:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:48.376 10:03:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:48.376 10:03:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.635 10:03:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.635 10:03:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.635 10:03:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.635 10:03:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.635 10:03:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:48.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:48.635 00:16:48.635 --- 10.0.0.2 ping statistics --- 00:16:48.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.635 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:48.635 10:03:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:48.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:48.635 00:16:48.635 --- 10.0.0.3 ping statistics --- 00:16:48.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.635 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:48.635 10:03:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:16:48.635 00:16:48.635 --- 10.0.0.1 ping statistics --- 00:16:48.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.635 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:48.635 10:03:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.635 10:03:47 -- nvmf/common.sh@421 -- # return 0 00:16:48.635 10:03:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.635 10:03:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.635 10:03:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.635 10:03:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.635 10:03:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.635 10:03:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.635 10:03:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.635 10:03:47 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:48.635 10:03:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.635 10:03:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.635 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:16:48.635 10:03:47 -- nvmf/common.sh@469 -- # nvmfpid=88290 00:16:48.635 10:03:47 -- nvmf/common.sh@470 -- # waitforlisten 88290 00:16:48.635 10:03:47 -- common/autotest_common.sh@829 -- # '[' -z 88290 ']' 00:16:48.635 10:03:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.635 10:03:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:48.635 10:03:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.635 10:03:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.635 10:03:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.635 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:16:48.635 [2024-12-16 10:03:47.112443] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.635 [2024-12-16 10:03:47.112703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.635 [2024-12-16 10:03:47.251983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.894 [2024-12-16 10:03:47.326813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.894 [2024-12-16 10:03:47.326971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.894 [2024-12-16 10:03:47.326988] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.894 [2024-12-16 10:03:47.327000] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
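The xtrace above interleaves the test-network bring-up with teardown noise from a previous run. Condensed, the topology that nvmf_veth_init builds for this job (interface names, addresses and firewall rules copied from the trace) is the following; this is a summary of what the log shows, not a standalone setup script:

    # target lives in its own network namespace, reachable from the host over a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, listener address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

With the network verified, nvmf_tgt is started inside the namespace with --wait-for-rpc, which is what lets tls.sh adjust the ssl socket options below before the framework initializes.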
00:16:48.894 [2024-12-16 10:03:47.327029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.461 10:03:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.461 10:03:48 -- common/autotest_common.sh@862 -- # return 0 00:16:49.461 10:03:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.461 10:03:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.461 10:03:48 -- common/autotest_common.sh@10 -- # set +x 00:16:49.720 10:03:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.720 10:03:48 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:49.720 10:03:48 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:49.979 true 00:16:49.979 10:03:48 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:49.979 10:03:48 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:50.238 10:03:48 -- target/tls.sh@82 -- # version=0 00:16:50.238 10:03:48 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:50.238 10:03:48 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.496 10:03:48 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.496 10:03:48 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:50.755 10:03:49 -- target/tls.sh@90 -- # version=13 00:16:50.755 10:03:49 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:50.755 10:03:49 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:50.755 10:03:49 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.755 10:03:49 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:51.014 10:03:49 -- target/tls.sh@98 -- # version=7 00:16:51.014 10:03:49 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:51.014 10:03:49 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:51.014 10:03:49 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.272 10:03:49 -- target/tls.sh@105 -- # ktls=false 00:16:51.272 10:03:49 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:51.272 10:03:49 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:51.531 10:03:50 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.531 10:03:50 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:51.789 10:03:50 -- target/tls.sh@113 -- # ktls=true 00:16:51.789 10:03:50 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:51.789 10:03:50 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:52.047 10:03:50 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.047 10:03:50 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:52.306 10:03:50 -- target/tls.sh@121 -- # ktls=false 00:16:52.306 10:03:50 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:52.306 10:03:50 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:52.306 10:03:50 -- target/tls.sh@49 -- # local key hash crc 00:16:52.306 10:03:50 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:52.306 10:03:50 -- target/tls.sh@51 -- # hash=01 00:16:52.306 10:03:50 -- 
target/tls.sh@52 -- # head -c 4 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # tail -c8 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # crc='p$H�' 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.306 10:03:50 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.306 10:03:50 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:52.306 10:03:50 -- target/tls.sh@49 -- # local key hash crc 00:16:52.306 10:03:50 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:52.306 10:03:50 -- target/tls.sh@51 -- # hash=01 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # gzip -1 -c 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # tail -c8 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # head -c 4 00:16:52.306 10:03:50 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:52.306 10:03:50 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.306 10:03:50 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.306 10:03:50 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.306 10:03:50 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:52.306 10:03:50 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.306 10:03:50 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.306 10:03:50 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.306 10:03:50 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:52.306 10:03:50 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.565 10:03:51 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:52.823 10:03:51 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.823 10:03:51 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.823 10:03:51 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.082 [2024-12-16 10:03:51.614624] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.082 10:03:51 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.340 10:03:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:53.599 [2024-12-16 10:03:52.010673] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:16:53.599 [2024-12-16 10:03:52.010928] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.599 10:03:52 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.858 malloc0 00:16:53.858 10:03:52 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.117 10:03:52 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.375 10:03:52 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.352 Initializing NVMe Controllers 00:17:04.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:04.352 Initialization complete. Launching workers. 00:17:04.352 ======================================================== 00:17:04.352 Latency(us) 00:17:04.352 Device Information : IOPS MiB/s Average min max 00:17:04.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12077.89 47.18 5299.74 1716.46 9143.94 00:17:04.352 ======================================================== 00:17:04.352 Total : 12077.89 47.18 5299.74 1716.46 9143.94 00:17:04.352 00:17:04.352 10:04:02 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.352 10:04:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.352 10:04:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.352 10:04:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.352 10:04:02 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:04.352 10:04:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.352 10:04:02 -- target/tls.sh@28 -- # bdevperf_pid=88659 00:17:04.352 10:04:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.352 10:04:02 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.352 10:04:02 -- target/tls.sh@31 -- # waitforlisten 88659 /var/tmp/bdevperf.sock 00:17:04.352 10:04:02 -- common/autotest_common.sh@829 -- # '[' -z 88659 ']' 00:17:04.352 10:04:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.352 10:04:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.352 10:04:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
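The format_interchange_psk fragments traced earlier (echo -n, gzip -1 -c, tail -c8, head -c 4, base64) build the NVMe TLS PSK interchange strings that end up in key1.txt and key2.txt. A minimal sketch of that pipeline assembled into one helper is shown below; it assumes GNU coreutils and relies on the gzip trailer ending with the little-endian CRC32 of the input followed by the input size, and it reproduces the two keys seen in the trace:

    # sketch of the traced pipeline; hash 01 selects the CRC-terminated format
    format_interchange_psk() {
        local key=$1 hash=01 crc
        # last 8 bytes of a gzip stream are CRC32 (little-endian) + input size;
        # keep only the 4 CRC bytes and append them to the key text
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
        echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    format_interchange_psk ffeeddccbbaa99887766554433221100
    # NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:

In the run above the two strings are written to test/nvmf/target/key1.txt and key2.txt and chmod'ed to 0600 before being handed to the target and the initiators.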
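For readability, the RPC sequence that produced the TLS listener and the perf run above can be condensed as follows; every call is taken from the setup_nvmf_tgt and spdk_nvme_perf steps traced in the log, so treat it as a summary of this run rather than a general recipe:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt

    # target side: force the ssl socket implementation and TLS 1.3, then bring up
    # a TCP transport with one subsystem, one malloc namespace and one allowed host
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

    # initiator side: spdk_nvme_perf with the ssl socket impl and the same PSK
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$key"

The bdevperf runs that follow reuse the same subsystem; only the PSK, host NQN and subsystem NQN handed to bdev_nvme_attach_controller change from case to case.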
00:17:04.352 10:04:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.352 10:04:02 -- common/autotest_common.sh@10 -- # set +x 00:17:04.611 [2024-12-16 10:04:03.020261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.611 [2024-12-16 10:04:03.020592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88659 ] 00:17:04.611 [2024-12-16 10:04:03.160059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.611 [2024-12-16 10:04:03.232237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.547 10:04:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.547 10:04:03 -- common/autotest_common.sh@862 -- # return 0 00:17:05.547 10:04:03 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:05.805 [2024-12-16 10:04:04.234259] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.805 TLSTESTn1 00:17:05.805 10:04:04 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:05.805 Running I/O for 10 seconds... 00:17:18.011 00:17:18.011 Latency(us) 00:17:18.011 [2024-12-16T10:04:16.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.011 [2024-12-16T10:04:16.636Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:18.011 Verification LBA range: start 0x0 length 0x2000 00:17:18.011 TLSTESTn1 : 10.02 6514.34 25.45 0.00 0.00 19616.52 4051.32 16801.05 00:17:18.011 [2024-12-16T10:04:16.636Z] =================================================================================================================== 00:17:18.011 [2024-12-16T10:04:16.637Z] Total : 6514.34 25.45 0.00 0.00 19616.52 4051.32 16801.05 00:17:18.012 0 00:17:18.012 10:04:14 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.012 10:04:14 -- target/tls.sh@45 -- # killprocess 88659 00:17:18.012 10:04:14 -- common/autotest_common.sh@936 -- # '[' -z 88659 ']' 00:17:18.012 10:04:14 -- common/autotest_common.sh@940 -- # kill -0 88659 00:17:18.012 10:04:14 -- common/autotest_common.sh@941 -- # uname 00:17:18.012 10:04:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.012 10:04:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88659 00:17:18.012 10:04:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.012 killing process with pid 88659 00:17:18.012 10:04:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.012 10:04:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88659' 00:17:18.012 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.012 00:17:18.012 Latency(us) 00:17:18.012 [2024-12-16T10:04:16.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.012 [2024-12-16T10:04:16.637Z] =================================================================================================================== 00:17:18.012 [2024-12-16T10:04:16.637Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.012 10:04:14 -- common/autotest_common.sh@955 -- # kill 88659 00:17:18.012 10:04:14 -- common/autotest_common.sh@960 -- # wait 88659 00:17:18.012 10:04:14 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.012 10:04:14 -- common/autotest_common.sh@650 -- # local es=0 00:17:18.012 10:04:14 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.012 10:04:14 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.012 10:04:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.012 10:04:14 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.012 10:04:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.012 10:04:14 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.012 10:04:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.012 10:04:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.012 10:04:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.012 10:04:14 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:18.012 10:04:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.012 10:04:14 -- target/tls.sh@28 -- # bdevperf_pid=88808 00:17:18.012 10:04:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.012 10:04:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.012 10:04:14 -- target/tls.sh@31 -- # waitforlisten 88808 /var/tmp/bdevperf.sock 00:17:18.012 10:04:14 -- common/autotest_common.sh@829 -- # '[' -z 88808 ']' 00:17:18.012 10:04:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.012 10:04:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.012 10:04:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.012 10:04:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.012 10:04:14 -- common/autotest_common.sh@10 -- # set +x 00:17:18.012 [2024-12-16 10:04:14.725827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:18.012 [2024-12-16 10:04:14.725949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88808 ] 00:17:18.012 [2024-12-16 10:04:14.866242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.012 [2024-12-16 10:04:14.930569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.012 10:04:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.012 10:04:15 -- common/autotest_common.sh@862 -- # return 0 00:17:18.012 10:04:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.012 [2024-12-16 10:04:15.894052] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.012 [2024-12-16 10:04:15.902281] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:18.012 [2024-12-16 10:04:15.902548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd13cc0 (107): Transport endpoint is not connected 00:17:18.012 [2024-12-16 10:04:15.903538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd13cc0 (9): Bad file descriptor 00:17:18.012 [2024-12-16 10:04:15.904535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.012 [2024-12-16 10:04:15.904572] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:18.012 [2024-12-16 10:04:15.904581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
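This failure is the expected outcome of the first negative case: tls.sh wraps run_bdevperf in NOT and deliberately hands the initiator key2.txt, while the target only registered key1.txt for host1, so the TLS connection never comes up and bdev_nvme_attach_controller reports the JSON-RPC error dumped just below. Stripped of the xtrace noise, the failing call amounts to:

    # expected to fail: host1 was registered on the target with key1.txt, not key2.txt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt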
00:17:18.012 2024/12/16 10:04:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:18.012 request: 00:17:18.012 { 00:17:18.012 "method": "bdev_nvme_attach_controller", 00:17:18.012 "params": { 00:17:18.012 "name": "TLSTEST", 00:17:18.012 "trtype": "tcp", 00:17:18.012 "traddr": "10.0.0.2", 00:17:18.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.012 "adrfam": "ipv4", 00:17:18.012 "trsvcid": "4420", 00:17:18.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.012 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:18.012 } 00:17:18.012 } 00:17:18.012 Got JSON-RPC error response 00:17:18.012 GoRPCClient: error on JSON-RPC call 00:17:18.012 10:04:15 -- target/tls.sh@36 -- # killprocess 88808 00:17:18.012 10:04:15 -- common/autotest_common.sh@936 -- # '[' -z 88808 ']' 00:17:18.012 10:04:15 -- common/autotest_common.sh@940 -- # kill -0 88808 00:17:18.012 10:04:15 -- common/autotest_common.sh@941 -- # uname 00:17:18.012 10:04:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.012 10:04:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88808 00:17:18.012 10:04:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.012 killing process with pid 88808 00:17:18.012 10:04:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.012 10:04:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88808' 00:17:18.012 10:04:15 -- common/autotest_common.sh@955 -- # kill 88808 00:17:18.012 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.012 00:17:18.012 Latency(us) 00:17:18.012 [2024-12-16T10:04:16.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.012 [2024-12-16T10:04:16.637Z] =================================================================================================================== 00:17:18.012 [2024-12-16T10:04:16.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.012 10:04:15 -- common/autotest_common.sh@960 -- # wait 88808 00:17:18.012 10:04:16 -- target/tls.sh@37 -- # return 1 00:17:18.012 10:04:16 -- common/autotest_common.sh@653 -- # es=1 00:17:18.012 10:04:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:18.012 10:04:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:18.012 10:04:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:18.012 10:04:16 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:18.012 10:04:16 -- common/autotest_common.sh@650 -- # local es=0 00:17:18.012 10:04:16 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:18.012 10:04:16 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.012 10:04:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.012 10:04:16 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.012 10:04:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.012 10:04:16 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:18.012 10:04:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.012 10:04:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.012 10:04:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:18.012 10:04:16 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:18.012 10:04:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.012 10:04:16 -- target/tls.sh@28 -- # bdevperf_pid=88855 00:17:18.012 10:04:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.012 10:04:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.012 10:04:16 -- target/tls.sh@31 -- # waitforlisten 88855 /var/tmp/bdevperf.sock 00:17:18.012 10:04:16 -- common/autotest_common.sh@829 -- # '[' -z 88855 ']' 00:17:18.012 10:04:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.012 10:04:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.012 10:04:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.012 10:04:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.012 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:18.012 [2024-12-16 10:04:16.203025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.012 [2024-12-16 10:04:16.203131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88855 ] 00:17:18.012 [2024-12-16 10:04:16.331549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.012 [2024-12-16 10:04:16.387136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.949 10:04:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.949 10:04:17 -- common/autotest_common.sh@862 -- # return 0 00:17:18.949 10:04:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:18.949 [2024-12-16 10:04:17.450676] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.949 [2024-12-16 10:04:17.462387] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:18.949 [2024-12-16 10:04:17.462458] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:18.949 [2024-12-16 10:04:17.462506] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:18.949 [2024-12-16 10:04:17.463190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x7d7cc0 (107): Transport endpoint is not connected 00:17:18.949 [2024-12-16 10:04:17.464181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d7cc0 (9): Bad file descriptor 00:17:18.949 [2024-12-16 10:04:17.465178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.949 [2024-12-16 10:04:17.465212] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:18.949 [2024-12-16 10:04:17.465238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.949 2024/12/16 10:04:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:18.949 request: 00:17:18.949 { 00:17:18.949 "method": "bdev_nvme_attach_controller", 00:17:18.949 "params": { 00:17:18.949 "name": "TLSTEST", 00:17:18.949 "trtype": "tcp", 00:17:18.949 "traddr": "10.0.0.2", 00:17:18.949 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:18.949 "adrfam": "ipv4", 00:17:18.949 "trsvcid": "4420", 00:17:18.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.949 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:18.949 } 00:17:18.949 } 00:17:18.949 Got JSON-RPC error response 00:17:18.949 GoRPCClient: error on JSON-RPC call 00:17:18.949 10:04:17 -- target/tls.sh@36 -- # killprocess 88855 00:17:18.949 10:04:17 -- common/autotest_common.sh@936 -- # '[' -z 88855 ']' 00:17:18.949 10:04:17 -- common/autotest_common.sh@940 -- # kill -0 88855 00:17:18.949 10:04:17 -- common/autotest_common.sh@941 -- # uname 00:17:18.949 10:04:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.949 10:04:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88855 00:17:18.949 killing process with pid 88855 00:17:18.949 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.949 00:17:18.949 Latency(us) 00:17:18.949 [2024-12-16T10:04:17.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.949 [2024-12-16T10:04:17.574Z] =================================================================================================================== 00:17:18.949 [2024-12-16T10:04:17.574Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.949 10:04:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.949 10:04:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.949 10:04:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88855' 00:17:18.949 10:04:17 -- common/autotest_common.sh@955 -- # kill 88855 00:17:18.949 10:04:17 -- common/autotest_common.sh@960 -- # wait 88855 00:17:19.208 10:04:17 -- target/tls.sh@37 -- # return 1 00:17:19.208 10:04:17 -- common/autotest_common.sh@653 -- # es=1 00:17:19.208 10:04:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.208 10:04:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.208 10:04:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.208 10:04:17 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.208 10:04:17 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:19.208 10:04:17 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.208 10:04:17 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:19.208 10:04:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.208 10:04:17 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:19.208 10:04:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.208 10:04:17 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.208 10:04:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.208 10:04:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:19.208 10:04:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.208 10:04:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:19.208 10:04:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.208 10:04:17 -- target/tls.sh@28 -- # bdevperf_pid=88895 00:17:19.208 10:04:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.208 10:04:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.208 10:04:17 -- target/tls.sh@31 -- # waitforlisten 88895 /var/tmp/bdevperf.sock 00:17:19.208 10:04:17 -- common/autotest_common.sh@829 -- # '[' -z 88895 ']' 00:17:19.208 10:04:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.208 10:04:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.208 10:04:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.208 10:04:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.208 10:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:19.208 [2024-12-16 10:04:17.753583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:19.208 [2024-12-16 10:04:17.753701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88895 ] 00:17:19.467 [2024-12-16 10:04:17.893732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.467 [2024-12-16 10:04:17.965679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.402 10:04:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.402 10:04:18 -- common/autotest_common.sh@862 -- # return 0 00:17:20.402 10:04:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.402 [2024-12-16 10:04:19.003247] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.402 [2024-12-16 10:04:19.012570] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.402 [2024-12-16 10:04:19.012626] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:20.402 [2024-12-16 10:04:19.012697] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.402 [2024-12-16 10:04:19.012702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199cc0 (107): Transport endpoint is not connected 00:17:20.402 [2024-12-16 10:04:19.013693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199cc0 (9): Bad file descriptor 00:17:20.402 [2024-12-16 10:04:19.014704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:20.402 [2024-12-16 10:04:19.014741] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.402 [2024-12-16 10:04:19.014767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
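The two cases above fail on identity rather than key contents: the target only created cnode1 and only registered a PSK for the (cnode1, host1) pair, so connecting as host2, or to the non-existent cnode2, leaves the target with no PSK to look up for the TLS identity, which is exactly the "Could not find PSK for identity: NVMe0R01 ..." errors logged here. As a purely hypothetical illustration (this RPC is not issued anywhere in this run), the host2 case would only have a matching identity to find if something like the following had been configured on the target first, which would presumably let that handshake locate a key:

    # hypothetical, not part of this run: give host2 its own PSK registration on cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt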
00:17:20.402 2024/12/16 10:04:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:20.402 request: 00:17:20.402 { 00:17:20.402 "method": "bdev_nvme_attach_controller", 00:17:20.402 "params": { 00:17:20.402 "name": "TLSTEST", 00:17:20.402 "trtype": "tcp", 00:17:20.402 "traddr": "10.0.0.2", 00:17:20.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.402 "adrfam": "ipv4", 00:17:20.402 "trsvcid": "4420", 00:17:20.402 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:20.402 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:20.402 } 00:17:20.403 } 00:17:20.403 Got JSON-RPC error response 00:17:20.403 GoRPCClient: error on JSON-RPC call 00:17:20.661 10:04:19 -- target/tls.sh@36 -- # killprocess 88895 00:17:20.662 10:04:19 -- common/autotest_common.sh@936 -- # '[' -z 88895 ']' 00:17:20.662 10:04:19 -- common/autotest_common.sh@940 -- # kill -0 88895 00:17:20.662 10:04:19 -- common/autotest_common.sh@941 -- # uname 00:17:20.662 10:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.662 10:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88895 00:17:20.662 killing process with pid 88895 00:17:20.662 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.662 00:17:20.662 Latency(us) 00:17:20.662 [2024-12-16T10:04:19.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.662 [2024-12-16T10:04:19.287Z] =================================================================================================================== 00:17:20.662 [2024-12-16T10:04:19.287Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.662 10:04:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:20.662 10:04:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:20.662 10:04:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88895' 00:17:20.662 10:04:19 -- common/autotest_common.sh@955 -- # kill 88895 00:17:20.662 10:04:19 -- common/autotest_common.sh@960 -- # wait 88895 00:17:20.662 10:04:19 -- target/tls.sh@37 -- # return 1 00:17:20.662 10:04:19 -- common/autotest_common.sh@653 -- # es=1 00:17:20.662 10:04:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.662 10:04:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.662 10:04:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.662 10:04:19 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.662 10:04:19 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.662 10:04:19 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.662 10:04:19 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.662 10:04:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.662 10:04:19 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.662 10:04:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.662 10:04:19 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:20.662 10:04:19 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.662 10:04:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.662 10:04:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.662 10:04:19 -- target/tls.sh@23 -- # psk= 00:17:20.662 10:04:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.662 10:04:19 -- target/tls.sh@28 -- # bdevperf_pid=88946 00:17:20.662 10:04:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.662 10:04:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.662 10:04:19 -- target/tls.sh@31 -- # waitforlisten 88946 /var/tmp/bdevperf.sock 00:17:20.662 10:04:19 -- common/autotest_common.sh@829 -- # '[' -z 88946 ']' 00:17:20.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.662 10:04:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.662 10:04:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.662 10:04:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.662 10:04:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.662 10:04:19 -- common/autotest_common.sh@10 -- # set +x 00:17:20.921 [2024-12-16 10:04:19.306450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.921 [2024-12-16 10:04:19.306558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88946 ] 00:17:20.921 [2024-12-16 10:04:19.446517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.921 [2024-12-16 10:04:19.507254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.909 10:04:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.909 10:04:20 -- common/autotest_common.sh@862 -- # return 0 00:17:21.909 10:04:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:21.909 [2024-12-16 10:04:20.462892] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.909 [2024-12-16 10:04:20.464801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e558c0 (9): Bad file descriptor 00:17:21.909 [2024-12-16 10:04:20.465796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.909 [2024-12-16 10:04:20.465834] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.909 [2024-12-16 10:04:20.465844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:21.909 2024/12/16 10:04:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:21.909 request: 00:17:21.909 { 00:17:21.909 "method": "bdev_nvme_attach_controller", 00:17:21.909 "params": { 00:17:21.909 "name": "TLSTEST", 00:17:21.909 "trtype": "tcp", 00:17:21.909 "traddr": "10.0.0.2", 00:17:21.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.909 "adrfam": "ipv4", 00:17:21.909 "trsvcid": "4420", 00:17:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:21.909 } 00:17:21.909 } 00:17:21.909 Got JSON-RPC error response 00:17:21.909 GoRPCClient: error on JSON-RPC call 00:17:21.909 10:04:20 -- target/tls.sh@36 -- # killprocess 88946 00:17:21.909 10:04:20 -- common/autotest_common.sh@936 -- # '[' -z 88946 ']' 00:17:21.909 10:04:20 -- common/autotest_common.sh@940 -- # kill -0 88946 00:17:21.909 10:04:20 -- common/autotest_common.sh@941 -- # uname 00:17:21.909 10:04:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.909 10:04:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88946 00:17:22.179 killing process with pid 88946 00:17:22.179 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.179 00:17:22.179 Latency(us) 00:17:22.179 [2024-12-16T10:04:20.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.179 [2024-12-16T10:04:20.804Z] =================================================================================================================== 00:17:22.179 [2024-12-16T10:04:20.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.179 10:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:22.179 10:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.179 10:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88946' 00:17:22.179 10:04:20 -- common/autotest_common.sh@955 -- # kill 88946 00:17:22.179 10:04:20 -- common/autotest_common.sh@960 -- # wait 88946 00:17:22.179 10:04:20 -- target/tls.sh@37 -- # return 1 00:17:22.179 10:04:20 -- common/autotest_common.sh@653 -- # es=1 00:17:22.179 10:04:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.179 10:04:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.179 10:04:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.179 10:04:20 -- target/tls.sh@167 -- # killprocess 88290 00:17:22.179 10:04:20 -- common/autotest_common.sh@936 -- # '[' -z 88290 ']' 00:17:22.179 10:04:20 -- common/autotest_common.sh@940 -- # kill -0 88290 00:17:22.179 10:04:20 -- common/autotest_common.sh@941 -- # uname 00:17:22.179 10:04:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.179 10:04:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88290 00:17:22.179 killing process with pid 88290 00:17:22.179 10:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.179 10:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:22.179 10:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88290' 00:17:22.179 10:04:20 -- common/autotest_common.sh@955 -- # kill 88290 00:17:22.179 10:04:20 -- common/autotest_common.sh@960 -- # wait 88290 00:17:22.438 10:04:20 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:22.438 10:04:20 -- target/tls.sh@49 -- # local key hash crc 00:17:22.438 10:04:20 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:22.438 10:04:20 -- target/tls.sh@51 -- # hash=02 00:17:22.438 10:04:20 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:22.438 10:04:20 -- target/tls.sh@52 -- # gzip -1 -c 00:17:22.438 10:04:20 -- target/tls.sh@52 -- # tail -c8 00:17:22.438 10:04:20 -- target/tls.sh@52 -- # head -c 4 00:17:22.438 10:04:20 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:22.438 10:04:20 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:22.438 10:04:20 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:22.438 10:04:20 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.438 10:04:20 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.438 10:04:20 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.438 10:04:20 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:22.438 10:04:20 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.438 10:04:20 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:22.438 10:04:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.438 10:04:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.438 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.438 10:04:20 -- nvmf/common.sh@469 -- # nvmfpid=89002 00:17:22.438 10:04:20 -- nvmf/common.sh@470 -- # waitforlisten 89002 00:17:22.438 10:04:20 -- common/autotest_common.sh@829 -- # '[' -z 89002 ']' 00:17:22.438 10:04:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.438 10:04:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.438 10:04:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.439 10:04:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.439 10:04:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.439 10:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:22.439 [2024-12-16 10:04:21.024719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.439 [2024-12-16 10:04:21.024820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.698 [2024-12-16 10:04:21.166382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.698 [2024-12-16 10:04:21.219447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.698 [2024-12-16 10:04:21.219582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
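For reference, the format_interchange_psk trace above builds the TLS interchange key by appending the configured key's CRC32 (recovered from a gzip trailer) and base64-encoding the result. A commented sketch of that pipeline, using the same sample key and hash id 02 (SHA-384):

```bash
# Sketch of the interchange-PSK construction traced above (same sample key, hash id 02).
key=00112233445566778899aabbccddeeff0011223344556677
hash=02
# gzip -1 ends its output with an 8-byte trailer: the CRC32 of the input followed by the
# input length, so `tail -c8 | head -c4` yields the raw CRC32 bytes of $key.
psk=$({ echo -n "$key"
        echo -n "$key" | gzip -1 -c | tail -c8 | head -c4
      } | base64)
echo "NVMeTLSkey-1:${hash}:${psk}:"
# Prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
# i.e. the key_long value that the trace writes to key_long.txt and chmods to 0600.
```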
00:17:22.698 [2024-12-16 10:04:21.219594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.698 [2024-12-16 10:04:21.219601] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.698 [2024-12-16 10:04:21.219624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.635 10:04:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.635 10:04:21 -- common/autotest_common.sh@862 -- # return 0 00:17:23.635 10:04:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.635 10:04:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:23.635 10:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:23.635 10:04:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.635 10:04:21 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.635 10:04:21 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.635 10:04:21 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.894 [2024-12-16 10:04:22.263980] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.894 10:04:22 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.894 10:04:22 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.153 [2024-12-16 10:04:22.760092] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.153 [2024-12-16 10:04:22.760317] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.412 10:04:22 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.690 malloc0 00:17:24.690 10:04:23 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.949 10:04:23 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.949 10:04:23 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.949 10:04:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:24.949 10:04:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:24.949 10:04:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:24.949 10:04:23 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:24.949 10:04:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:24.949 10:04:23 -- target/tls.sh@28 -- # bdevperf_pid=89104 00:17:24.949 10:04:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:24.949 10:04:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:24.949 10:04:23 -- target/tls.sh@31 -- # waitforlisten 89104 /var/tmp/bdevperf.sock 00:17:24.949 10:04:23 -- 
common/autotest_common.sh@829 -- # '[' -z 89104 ']' 00:17:24.949 10:04:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.949 10:04:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.949 10:04:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.949 10:04:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.949 10:04:23 -- common/autotest_common.sh@10 -- # set +x 00:17:24.949 [2024-12-16 10:04:23.568674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.949 [2024-12-16 10:04:23.568744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89104 ] 00:17:25.208 [2024-12-16 10:04:23.702102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.208 [2024-12-16 10:04:23.762229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.145 10:04:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.145 10:04:24 -- common/autotest_common.sh@862 -- # return 0 00:17:26.145 10:04:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.145 [2024-12-16 10:04:24.767336] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.404 TLSTESTn1 00:17:26.404 10:04:24 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:26.404 Running I/O for 10 seconds... 
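Stripped of the trace noise, the client side of this passing case is two calls against the bdevperf RPC socket, with arguments exactly as traced above:

```bash
# Client (initiator) side of the passing TLS case: attach a controller using the PSK file,
# then let bdevperf drive verify I/O over the TLS-protected TCP connection.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests
```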
00:17:36.381 00:17:36.382 Latency(us) 00:17:36.382 [2024-12-16T10:04:35.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.382 [2024-12-16T10:04:35.007Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.382 Verification LBA range: start 0x0 length 0x2000 00:17:36.382 TLSTESTn1 : 10.02 6235.76 24.36 0.00 0.00 20492.42 5004.57 23235.49 00:17:36.382 [2024-12-16T10:04:35.007Z] =================================================================================================================== 00:17:36.382 [2024-12-16T10:04:35.007Z] Total : 6235.76 24.36 0.00 0.00 20492.42 5004.57 23235.49 00:17:36.382 0 00:17:36.382 10:04:34 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.382 10:04:34 -- target/tls.sh@45 -- # killprocess 89104 00:17:36.382 10:04:34 -- common/autotest_common.sh@936 -- # '[' -z 89104 ']' 00:17:36.382 10:04:34 -- common/autotest_common.sh@940 -- # kill -0 89104 00:17:36.382 10:04:34 -- common/autotest_common.sh@941 -- # uname 00:17:36.382 10:04:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.382 10:04:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89104 00:17:36.643 killing process with pid 89104 00:17:36.643 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.643 00:17:36.643 Latency(us) 00:17:36.643 [2024-12-16T10:04:35.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.643 [2024-12-16T10:04:35.268Z] =================================================================================================================== 00:17:36.643 [2024-12-16T10:04:35.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.643 10:04:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:36.643 10:04:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:36.643 10:04:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89104' 00:17:36.643 10:04:35 -- common/autotest_common.sh@955 -- # kill 89104 00:17:36.643 10:04:35 -- common/autotest_common.sh@960 -- # wait 89104 00:17:36.643 10:04:35 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.643 10:04:35 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.643 10:04:35 -- common/autotest_common.sh@650 -- # local es=0 00:17:36.643 10:04:35 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.643 10:04:35 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:36.643 10:04:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.643 10:04:35 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:36.643 10:04:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:36.643 10:04:35 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:36.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:36.643 10:04:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.643 10:04:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.643 10:04:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.643 10:04:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:36.643 10:04:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.643 10:04:35 -- target/tls.sh@28 -- # bdevperf_pid=89255 00:17:36.643 10:04:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.643 10:04:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.643 10:04:35 -- target/tls.sh@31 -- # waitforlisten 89255 /var/tmp/bdevperf.sock 00:17:36.643 10:04:35 -- common/autotest_common.sh@829 -- # '[' -z 89255 ']' 00:17:36.643 10:04:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.643 10:04:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.643 10:04:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.643 10:04:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.643 10:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:36.903 [2024-12-16 10:04:35.277227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:36.903 [2024-12-16 10:04:35.277985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89255 ] 00:17:36.903 [2024-12-16 10:04:35.410121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.903 [2024-12-16 10:04:35.466091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.839 10:04:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.839 10:04:36 -- common/autotest_common.sh@862 -- # return 0 00:17:37.839 10:04:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.098 [2024-12-16 10:04:36.543526] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.098 [2024-12-16 10:04:36.543963] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:38.098 2024/12/16 10:04:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.098 request: 00:17:38.098 { 00:17:38.098 "method": "bdev_nvme_attach_controller", 00:17:38.098 "params": { 00:17:38.098 "name": "TLSTEST", 00:17:38.098 "trtype": "tcp", 00:17:38.098 "traddr": "10.0.0.2", 00:17:38.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.098 "adrfam": "ipv4", 00:17:38.098 "trsvcid": "4420", 
00:17:38.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.098 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:38.098 } 00:17:38.098 } 00:17:38.098 Got JSON-RPC error response 00:17:38.098 GoRPCClient: error on JSON-RPC call 00:17:38.098 10:04:36 -- target/tls.sh@36 -- # killprocess 89255 00:17:38.098 10:04:36 -- common/autotest_common.sh@936 -- # '[' -z 89255 ']' 00:17:38.098 10:04:36 -- common/autotest_common.sh@940 -- # kill -0 89255 00:17:38.098 10:04:36 -- common/autotest_common.sh@941 -- # uname 00:17:38.098 10:04:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.098 10:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89255 00:17:38.098 killing process with pid 89255 00:17:38.098 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.098 00:17:38.098 Latency(us) 00:17:38.098 [2024-12-16T10:04:36.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.098 [2024-12-16T10:04:36.723Z] =================================================================================================================== 00:17:38.098 [2024-12-16T10:04:36.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.098 10:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.098 10:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.098 10:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89255' 00:17:38.098 10:04:36 -- common/autotest_common.sh@955 -- # kill 89255 00:17:38.098 10:04:36 -- common/autotest_common.sh@960 -- # wait 89255 00:17:38.357 10:04:36 -- target/tls.sh@37 -- # return 1 00:17:38.357 10:04:36 -- common/autotest_common.sh@653 -- # es=1 00:17:38.357 10:04:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.357 10:04:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.357 10:04:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.357 10:04:36 -- target/tls.sh@183 -- # killprocess 89002 00:17:38.357 10:04:36 -- common/autotest_common.sh@936 -- # '[' -z 89002 ']' 00:17:38.357 10:04:36 -- common/autotest_common.sh@940 -- # kill -0 89002 00:17:38.357 10:04:36 -- common/autotest_common.sh@941 -- # uname 00:17:38.357 10:04:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.357 10:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89002 00:17:38.357 killing process with pid 89002 00:17:38.357 10:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.357 10:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.357 10:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89002' 00:17:38.357 10:04:36 -- common/autotest_common.sh@955 -- # kill 89002 00:17:38.357 10:04:36 -- common/autotest_common.sh@960 -- # wait 89002 00:17:38.616 10:04:37 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:38.616 10:04:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:38.616 10:04:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.616 10:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:38.616 10:04:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.616 10:04:37 -- nvmf/common.sh@469 -- # nvmfpid=89307 00:17:38.616 10:04:37 -- nvmf/common.sh@470 -- # waitforlisten 89307 00:17:38.616 10:04:37 -- common/autotest_common.sh@829 -- # '[' -z 89307 ']' 00:17:38.616 
10:04:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.616 10:04:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.616 10:04:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.616 10:04:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.616 10:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:38.616 [2024-12-16 10:04:37.073563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:38.616 [2024-12-16 10:04:37.074131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.616 [2024-12-16 10:04:37.204045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.875 [2024-12-16 10:04:37.267081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:38.875 [2024-12-16 10:04:37.267436] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.875 [2024-12-16 10:04:37.267578] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.875 [2024-12-16 10:04:37.267667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.875 [2024-12-16 10:04:37.267748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.442 10:04:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.442 10:04:38 -- common/autotest_common.sh@862 -- # return 0 00:17:39.442 10:04:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:39.442 10:04:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.442 10:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:39.442 10:04:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.442 10:04:38 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.442 10:04:38 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.442 10:04:38 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.442 10:04:38 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:39.442 10:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.443 10:04:38 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:39.443 10:04:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.443 10:04:38 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.443 10:04:38 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.443 10:04:38 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.702 [2024-12-16 10:04:38.319842] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.002 10:04:38 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:40.284 
10:04:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:40.284 [2024-12-16 10:04:38.795963] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.284 [2024-12-16 10:04:38.796400] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.284 10:04:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:40.543 malloc0 00:17:40.543 10:04:39 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:40.802 10:04:39 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.060 [2024-12-16 10:04:39.527109] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:41.060 [2024-12-16 10:04:39.527559] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:41.060 [2024-12-16 10:04:39.527678] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:41.060 2024/12/16 10:04:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:41.060 request: 00:17:41.060 { 00:17:41.060 "method": "nvmf_subsystem_add_host", 00:17:41.060 "params": { 00:17:41.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.060 "host": "nqn.2016-06.io.spdk:host1", 00:17:41.060 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:41.060 } 00:17:41.060 } 00:17:41.060 Got JSON-RPC error response 00:17:41.060 GoRPCClient: error on JSON-RPC call 00:17:41.060 10:04:39 -- common/autotest_common.sh@653 -- # es=1 00:17:41.060 10:04:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.060 10:04:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.060 10:04:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.060 10:04:39 -- target/tls.sh@189 -- # killprocess 89307 00:17:41.060 10:04:39 -- common/autotest_common.sh@936 -- # '[' -z 89307 ']' 00:17:41.060 10:04:39 -- common/autotest_common.sh@940 -- # kill -0 89307 00:17:41.060 10:04:39 -- common/autotest_common.sh@941 -- # uname 00:17:41.060 10:04:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.060 10:04:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89307 00:17:41.060 10:04:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.060 10:04:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.060 killing process with pid 89307 00:17:41.060 10:04:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89307' 00:17:41.060 10:04:39 -- common/autotest_common.sh@955 -- # kill 89307 00:17:41.060 10:04:39 -- common/autotest_common.sh@960 -- # wait 89307 00:17:41.319 10:04:39 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.319 10:04:39 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:41.319 10:04:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.319 10:04:39 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.319 10:04:39 -- common/autotest_common.sh@10 -- # set +x 00:17:41.319 10:04:39 -- nvmf/common.sh@469 -- # nvmfpid=89418 00:17:41.319 10:04:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.319 10:04:39 -- nvmf/common.sh@470 -- # waitforlisten 89418 00:17:41.319 10:04:39 -- common/autotest_common.sh@829 -- # '[' -z 89418 ']' 00:17:41.319 10:04:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.319 10:04:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.319 10:04:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.319 10:04:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.319 10:04:39 -- common/autotest_common.sh@10 -- # set +x 00:17:41.319 [2024-12-16 10:04:39.830887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:41.319 [2024-12-16 10:04:39.831514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.577 [2024-12-16 10:04:39.959500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.577 [2024-12-16 10:04:40.030864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.577 [2024-12-16 10:04:40.031029] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.577 [2024-12-16 10:04:40.031040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.577 [2024-12-16 10:04:40.031048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
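The "Incorrect permissions for PSK file" / "Could not retrieve PSK from file" failures earlier in the trace are deliberate: key_long.txt had been chmod'ed to 0666, and SPDK rejects PSK files that group or other can access; the run only proceeds once the mode is back to 0600. A hypothetical guard (not part of the test script) before reusing a key file:

```bash
# Ensure a PSK file is private before handing it to bdev_nvme_attach_controller or
# nvmf_subsystem_add_host; mode 0666 reproduces the failures seen above, 0600 passes.
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
if [ "$(stat -c '%a' "$key")" != "600" ]; then
    chmod 0600 "$key"
fi
```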
00:17:41.577 [2024-12-16 10:04:40.031070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.513 10:04:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.513 10:04:40 -- common/autotest_common.sh@862 -- # return 0 00:17:42.513 10:04:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.513 10:04:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.513 10:04:40 -- common/autotest_common.sh@10 -- # set +x 00:17:42.513 10:04:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.513 10:04:40 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.513 10:04:40 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.513 10:04:40 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.513 [2024-12-16 10:04:41.067667] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.513 10:04:41 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:42.772 10:04:41 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.031 [2024-12-16 10:04:41.535797] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.031 [2024-12-16 10:04:41.536176] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.031 10:04:41 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.289 malloc0 00:17:43.289 10:04:41 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.548 10:04:41 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.807 10:04:42 -- target/tls.sh@197 -- # bdevperf_pid=89522 00:17:43.807 10:04:42 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.807 10:04:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.807 10:04:42 -- target/tls.sh@200 -- # waitforlisten 89522 /var/tmp/bdevperf.sock 00:17:43.807 10:04:42 -- common/autotest_common.sh@829 -- # '[' -z 89522 ']' 00:17:43.807 10:04:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.807 10:04:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.807 10:04:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.807 10:04:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.807 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.807 [2024-12-16 10:04:42.214254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:43.807 [2024-12-16 10:04:42.214921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89522 ] 00:17:43.807 [2024-12-16 10:04:42.351812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.807 [2024-12-16 10:04:42.420449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.743 10:04:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.743 10:04:43 -- common/autotest_common.sh@862 -- # return 0 00:17:44.744 10:04:43 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.744 [2024-12-16 10:04:43.354780] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.002 TLSTESTn1 00:17:45.002 10:04:43 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:45.261 10:04:43 -- target/tls.sh@205 -- # tgtconf='{ 00:17:45.261 "subsystems": [ 00:17:45.261 { 00:17:45.261 "subsystem": "iobuf", 00:17:45.261 "config": [ 00:17:45.261 { 00:17:45.261 "method": "iobuf_set_options", 00:17:45.261 "params": { 00:17:45.261 "large_bufsize": 135168, 00:17:45.261 "large_pool_count": 1024, 00:17:45.261 "small_bufsize": 8192, 00:17:45.261 "small_pool_count": 8192 00:17:45.261 } 00:17:45.261 } 00:17:45.261 ] 00:17:45.261 }, 00:17:45.261 { 00:17:45.261 "subsystem": "sock", 00:17:45.261 "config": [ 00:17:45.261 { 00:17:45.261 "method": "sock_impl_set_options", 00:17:45.261 "params": { 00:17:45.261 "enable_ktls": false, 00:17:45.261 "enable_placement_id": 0, 00:17:45.261 "enable_quickack": false, 00:17:45.261 "enable_recv_pipe": true, 00:17:45.261 "enable_zerocopy_send_client": false, 00:17:45.261 "enable_zerocopy_send_server": true, 00:17:45.261 "impl_name": "posix", 00:17:45.261 "recv_buf_size": 2097152, 00:17:45.261 "send_buf_size": 2097152, 00:17:45.261 "tls_version": 0, 00:17:45.261 "zerocopy_threshold": 0 00:17:45.261 } 00:17:45.261 }, 00:17:45.261 { 00:17:45.261 "method": "sock_impl_set_options", 00:17:45.261 "params": { 00:17:45.261 "enable_ktls": false, 00:17:45.261 "enable_placement_id": 0, 00:17:45.261 "enable_quickack": false, 00:17:45.261 "enable_recv_pipe": true, 00:17:45.261 "enable_zerocopy_send_client": false, 00:17:45.262 "enable_zerocopy_send_server": true, 00:17:45.262 "impl_name": "ssl", 00:17:45.262 "recv_buf_size": 4096, 00:17:45.262 "send_buf_size": 4096, 00:17:45.262 "tls_version": 0, 00:17:45.262 "zerocopy_threshold": 0 00:17:45.262 } 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "vmd", 00:17:45.262 "config": [] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "accel", 00:17:45.262 "config": [ 00:17:45.262 { 00:17:45.262 "method": "accel_set_options", 00:17:45.262 "params": { 00:17:45.262 "buf_count": 2048, 00:17:45.262 "large_cache_size": 16, 00:17:45.262 "sequence_count": 2048, 00:17:45.262 "small_cache_size": 128, 00:17:45.262 "task_count": 2048 00:17:45.262 } 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "bdev", 00:17:45.262 "config": [ 00:17:45.262 { 00:17:45.262 "method": "bdev_set_options", 00:17:45.262 "params": { 00:17:45.262 
"bdev_auto_examine": true, 00:17:45.262 "bdev_io_cache_size": 256, 00:17:45.262 "bdev_io_pool_size": 65535, 00:17:45.262 "iobuf_large_cache_size": 16, 00:17:45.262 "iobuf_small_cache_size": 128 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_raid_set_options", 00:17:45.262 "params": { 00:17:45.262 "process_window_size_kb": 1024 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_iscsi_set_options", 00:17:45.262 "params": { 00:17:45.262 "timeout_sec": 30 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_nvme_set_options", 00:17:45.262 "params": { 00:17:45.262 "action_on_timeout": "none", 00:17:45.262 "allow_accel_sequence": false, 00:17:45.262 "arbitration_burst": 0, 00:17:45.262 "bdev_retry_count": 3, 00:17:45.262 "ctrlr_loss_timeout_sec": 0, 00:17:45.262 "delay_cmd_submit": true, 00:17:45.262 "fast_io_fail_timeout_sec": 0, 00:17:45.262 "generate_uuids": false, 00:17:45.262 "high_priority_weight": 0, 00:17:45.262 "io_path_stat": false, 00:17:45.262 "io_queue_requests": 0, 00:17:45.262 "keep_alive_timeout_ms": 10000, 00:17:45.262 "low_priority_weight": 0, 00:17:45.262 "medium_priority_weight": 0, 00:17:45.262 "nvme_adminq_poll_period_us": 10000, 00:17:45.262 "nvme_ioq_poll_period_us": 0, 00:17:45.262 "reconnect_delay_sec": 0, 00:17:45.262 "timeout_admin_us": 0, 00:17:45.262 "timeout_us": 0, 00:17:45.262 "transport_ack_timeout": 0, 00:17:45.262 "transport_retry_count": 4, 00:17:45.262 "transport_tos": 0 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_nvme_set_hotplug", 00:17:45.262 "params": { 00:17:45.262 "enable": false, 00:17:45.262 "period_us": 100000 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_malloc_create", 00:17:45.262 "params": { 00:17:45.262 "block_size": 4096, 00:17:45.262 "name": "malloc0", 00:17:45.262 "num_blocks": 8192, 00:17:45.262 "optimal_io_boundary": 0, 00:17:45.262 "physical_block_size": 4096, 00:17:45.262 "uuid": "05592178-8fc9-4add-9bba-4e776fed3d57" 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "bdev_wait_for_examine" 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "nbd", 00:17:45.262 "config": [] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "scheduler", 00:17:45.262 "config": [ 00:17:45.262 { 00:17:45.262 "method": "framework_set_scheduler", 00:17:45.262 "params": { 00:17:45.262 "name": "static" 00:17:45.262 } 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "subsystem": "nvmf", 00:17:45.262 "config": [ 00:17:45.262 { 00:17:45.262 "method": "nvmf_set_config", 00:17:45.262 "params": { 00:17:45.262 "admin_cmd_passthru": { 00:17:45.262 "identify_ctrlr": false 00:17:45.262 }, 00:17:45.262 "discovery_filter": "match_any" 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_set_max_subsystems", 00:17:45.262 "params": { 00:17:45.262 "max_subsystems": 1024 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_set_crdt", 00:17:45.262 "params": { 00:17:45.262 "crdt1": 0, 00:17:45.262 "crdt2": 0, 00:17:45.262 "crdt3": 0 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_create_transport", 00:17:45.262 "params": { 00:17:45.262 "abort_timeout_sec": 1, 00:17:45.262 "buf_cache_size": 4294967295, 00:17:45.262 "c2h_success": false, 00:17:45.262 "dif_insert_or_strip": false, 00:17:45.262 "in_capsule_data_size": 4096, 00:17:45.262 "io_unit_size": 131072, 00:17:45.262 "max_aq_depth": 128, 
00:17:45.262 "max_io_qpairs_per_ctrlr": 127, 00:17:45.262 "max_io_size": 131072, 00:17:45.262 "max_queue_depth": 128, 00:17:45.262 "num_shared_buffers": 511, 00:17:45.262 "sock_priority": 0, 00:17:45.262 "trtype": "TCP", 00:17:45.262 "zcopy": false 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_create_subsystem", 00:17:45.262 "params": { 00:17:45.262 "allow_any_host": false, 00:17:45.262 "ana_reporting": false, 00:17:45.262 "max_cntlid": 65519, 00:17:45.262 "max_namespaces": 10, 00:17:45.262 "min_cntlid": 1, 00:17:45.262 "model_number": "SPDK bdev Controller", 00:17:45.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.262 "serial_number": "SPDK00000000000001" 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_subsystem_add_host", 00:17:45.262 "params": { 00:17:45.262 "host": "nqn.2016-06.io.spdk:host1", 00:17:45.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.262 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_subsystem_add_ns", 00:17:45.262 "params": { 00:17:45.262 "namespace": { 00:17:45.262 "bdev_name": "malloc0", 00:17:45.262 "nguid": "055921788FC94ADD9BBA4E776FED3D57", 00:17:45.262 "nsid": 1, 00:17:45.262 "uuid": "05592178-8fc9-4add-9bba-4e776fed3d57" 00:17:45.262 }, 00:17:45.262 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:45.262 } 00:17:45.262 }, 00:17:45.262 { 00:17:45.262 "method": "nvmf_subsystem_add_listener", 00:17:45.262 "params": { 00:17:45.262 "listen_address": { 00:17:45.262 "adrfam": "IPv4", 00:17:45.262 "traddr": "10.0.0.2", 00:17:45.262 "trsvcid": "4420", 00:17:45.262 "trtype": "TCP" 00:17:45.262 }, 00:17:45.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.262 "secure_channel": true 00:17:45.262 } 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 } 00:17:45.262 ] 00:17:45.262 }' 00:17:45.262 10:04:43 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:45.522 10:04:44 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:45.522 "subsystems": [ 00:17:45.522 { 00:17:45.522 "subsystem": "iobuf", 00:17:45.522 "config": [ 00:17:45.522 { 00:17:45.522 "method": "iobuf_set_options", 00:17:45.522 "params": { 00:17:45.522 "large_bufsize": 135168, 00:17:45.522 "large_pool_count": 1024, 00:17:45.522 "small_bufsize": 8192, 00:17:45.522 "small_pool_count": 8192 00:17:45.522 } 00:17:45.522 } 00:17:45.522 ] 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "subsystem": "sock", 00:17:45.522 "config": [ 00:17:45.522 { 00:17:45.522 "method": "sock_impl_set_options", 00:17:45.522 "params": { 00:17:45.522 "enable_ktls": false, 00:17:45.522 "enable_placement_id": 0, 00:17:45.522 "enable_quickack": false, 00:17:45.522 "enable_recv_pipe": true, 00:17:45.522 "enable_zerocopy_send_client": false, 00:17:45.522 "enable_zerocopy_send_server": true, 00:17:45.522 "impl_name": "posix", 00:17:45.522 "recv_buf_size": 2097152, 00:17:45.522 "send_buf_size": 2097152, 00:17:45.522 "tls_version": 0, 00:17:45.522 "zerocopy_threshold": 0 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "sock_impl_set_options", 00:17:45.522 "params": { 00:17:45.522 "enable_ktls": false, 00:17:45.522 "enable_placement_id": 0, 00:17:45.522 "enable_quickack": false, 00:17:45.522 "enable_recv_pipe": true, 00:17:45.522 "enable_zerocopy_send_client": false, 00:17:45.522 "enable_zerocopy_send_server": true, 00:17:45.522 "impl_name": "ssl", 00:17:45.522 "recv_buf_size": 4096, 00:17:45.522 "send_buf_size": 4096, 00:17:45.522 
"tls_version": 0, 00:17:45.522 "zerocopy_threshold": 0 00:17:45.522 } 00:17:45.522 } 00:17:45.522 ] 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "subsystem": "vmd", 00:17:45.522 "config": [] 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "subsystem": "accel", 00:17:45.522 "config": [ 00:17:45.522 { 00:17:45.522 "method": "accel_set_options", 00:17:45.522 "params": { 00:17:45.522 "buf_count": 2048, 00:17:45.522 "large_cache_size": 16, 00:17:45.522 "sequence_count": 2048, 00:17:45.522 "small_cache_size": 128, 00:17:45.522 "task_count": 2048 00:17:45.522 } 00:17:45.522 } 00:17:45.522 ] 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "subsystem": "bdev", 00:17:45.522 "config": [ 00:17:45.522 { 00:17:45.522 "method": "bdev_set_options", 00:17:45.522 "params": { 00:17:45.522 "bdev_auto_examine": true, 00:17:45.522 "bdev_io_cache_size": 256, 00:17:45.522 "bdev_io_pool_size": 65535, 00:17:45.522 "iobuf_large_cache_size": 16, 00:17:45.522 "iobuf_small_cache_size": 128 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_raid_set_options", 00:17:45.522 "params": { 00:17:45.522 "process_window_size_kb": 1024 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_iscsi_set_options", 00:17:45.522 "params": { 00:17:45.522 "timeout_sec": 30 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_nvme_set_options", 00:17:45.522 "params": { 00:17:45.522 "action_on_timeout": "none", 00:17:45.522 "allow_accel_sequence": false, 00:17:45.522 "arbitration_burst": 0, 00:17:45.522 "bdev_retry_count": 3, 00:17:45.522 "ctrlr_loss_timeout_sec": 0, 00:17:45.522 "delay_cmd_submit": true, 00:17:45.522 "fast_io_fail_timeout_sec": 0, 00:17:45.522 "generate_uuids": false, 00:17:45.522 "high_priority_weight": 0, 00:17:45.522 "io_path_stat": false, 00:17:45.522 "io_queue_requests": 512, 00:17:45.522 "keep_alive_timeout_ms": 10000, 00:17:45.522 "low_priority_weight": 0, 00:17:45.522 "medium_priority_weight": 0, 00:17:45.522 "nvme_adminq_poll_period_us": 10000, 00:17:45.522 "nvme_ioq_poll_period_us": 0, 00:17:45.522 "reconnect_delay_sec": 0, 00:17:45.522 "timeout_admin_us": 0, 00:17:45.522 "timeout_us": 0, 00:17:45.522 "transport_ack_timeout": 0, 00:17:45.522 "transport_retry_count": 4, 00:17:45.522 "transport_tos": 0 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_nvme_attach_controller", 00:17:45.522 "params": { 00:17:45.522 "adrfam": "IPv4", 00:17:45.522 "ctrlr_loss_timeout_sec": 0, 00:17:45.522 "ddgst": false, 00:17:45.522 "fast_io_fail_timeout_sec": 0, 00:17:45.522 "hdgst": false, 00:17:45.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.522 "name": "TLSTEST", 00:17:45.522 "prchk_guard": false, 00:17:45.522 "prchk_reftag": false, 00:17:45.522 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:45.522 "reconnect_delay_sec": 0, 00:17:45.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:45.522 "traddr": "10.0.0.2", 00:17:45.522 "trsvcid": "4420", 00:17:45.522 "trtype": "TCP" 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_nvme_set_hotplug", 00:17:45.522 "params": { 00:17:45.522 "enable": false, 00:17:45.522 "period_us": 100000 00:17:45.522 } 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "method": "bdev_wait_for_examine" 00:17:45.522 } 00:17:45.522 ] 00:17:45.522 }, 00:17:45.522 { 00:17:45.522 "subsystem": "nbd", 00:17:45.522 "config": [] 00:17:45.522 } 00:17:45.522 ] 00:17:45.522 }' 00:17:45.522 10:04:44 -- target/tls.sh@208 -- # killprocess 89522 00:17:45.522 10:04:44 -- 
common/autotest_common.sh@936 -- # '[' -z 89522 ']' 00:17:45.522 10:04:44 -- common/autotest_common.sh@940 -- # kill -0 89522 00:17:45.522 10:04:44 -- common/autotest_common.sh@941 -- # uname 00:17:45.522 10:04:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.522 10:04:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89522 00:17:45.522 10:04:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.522 10:04:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.522 killing process with pid 89522 00:17:45.522 10:04:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89522' 00:17:45.522 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.522 00:17:45.522 Latency(us) 00:17:45.522 [2024-12-16T10:04:44.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.522 [2024-12-16T10:04:44.147Z] =================================================================================================================== 00:17:45.522 [2024-12-16T10:04:44.147Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.522 10:04:44 -- common/autotest_common.sh@955 -- # kill 89522 00:17:45.522 10:04:44 -- common/autotest_common.sh@960 -- # wait 89522 00:17:45.781 10:04:44 -- target/tls.sh@209 -- # killprocess 89418 00:17:45.781 10:04:44 -- common/autotest_common.sh@936 -- # '[' -z 89418 ']' 00:17:45.781 10:04:44 -- common/autotest_common.sh@940 -- # kill -0 89418 00:17:45.781 10:04:44 -- common/autotest_common.sh@941 -- # uname 00:17:45.781 10:04:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.781 10:04:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89418 00:17:45.781 killing process with pid 89418 00:17:45.781 10:04:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:45.781 10:04:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:45.781 10:04:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89418' 00:17:45.781 10:04:44 -- common/autotest_common.sh@955 -- # kill 89418 00:17:45.781 10:04:44 -- common/autotest_common.sh@960 -- # wait 89418 00:17:46.041 10:04:44 -- target/tls.sh@212 -- # echo '{ 00:17:46.041 "subsystems": [ 00:17:46.041 { 00:17:46.041 "subsystem": "iobuf", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "iobuf_set_options", 00:17:46.041 "params": { 00:17:46.041 "large_bufsize": 135168, 00:17:46.041 "large_pool_count": 1024, 00:17:46.041 "small_bufsize": 8192, 00:17:46.041 "small_pool_count": 8192 00:17:46.041 } 00:17:46.041 } 00:17:46.041 ] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "sock", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "sock_impl_set_options", 00:17:46.041 "params": { 00:17:46.041 "enable_ktls": false, 00:17:46.041 "enable_placement_id": 0, 00:17:46.041 "enable_quickack": false, 00:17:46.041 "enable_recv_pipe": true, 00:17:46.041 "enable_zerocopy_send_client": false, 00:17:46.041 "enable_zerocopy_send_server": true, 00:17:46.041 "impl_name": "posix", 00:17:46.041 "recv_buf_size": 2097152, 00:17:46.041 "send_buf_size": 2097152, 00:17:46.041 "tls_version": 0, 00:17:46.041 "zerocopy_threshold": 0 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "sock_impl_set_options", 00:17:46.041 "params": { 00:17:46.041 "enable_ktls": false, 00:17:46.041 "enable_placement_id": 0, 00:17:46.041 "enable_quickack": false, 00:17:46.041 "enable_recv_pipe": true, 00:17:46.041 
"enable_zerocopy_send_client": false, 00:17:46.041 "enable_zerocopy_send_server": true, 00:17:46.041 "impl_name": "ssl", 00:17:46.041 "recv_buf_size": 4096, 00:17:46.041 "send_buf_size": 4096, 00:17:46.041 "tls_version": 0, 00:17:46.041 "zerocopy_threshold": 0 00:17:46.041 } 00:17:46.041 } 00:17:46.041 ] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "vmd", 00:17:46.041 "config": [] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "accel", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "accel_set_options", 00:17:46.041 "params": { 00:17:46.041 "buf_count": 2048, 00:17:46.041 "large_cache_size": 16, 00:17:46.041 "sequence_count": 2048, 00:17:46.041 "small_cache_size": 128, 00:17:46.041 "task_count": 2048 00:17:46.041 } 00:17:46.041 } 00:17:46.041 ] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "bdev", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "bdev_set_options", 00:17:46.041 "params": { 00:17:46.041 "bdev_auto_examine": true, 00:17:46.041 "bdev_io_cache_size": 256, 00:17:46.041 "bdev_io_pool_size": 65535, 00:17:46.041 "iobuf_large_cache_size": 16, 00:17:46.041 "iobuf_small_cache_size": 128 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_raid_set_options", 00:17:46.041 "params": { 00:17:46.041 "process_window_size_kb": 1024 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_iscsi_set_options", 00:17:46.041 "params": { 00:17:46.041 "timeout_sec": 30 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_nvme_set_options", 00:17:46.041 "params": { 00:17:46.041 "action_on_timeout": "none", 00:17:46.041 "allow_accel_sequence": false, 00:17:46.041 "arbitration_burst": 0, 00:17:46.041 "bdev_retry_count": 3, 00:17:46.041 "ctrlr_loss_timeout_sec": 0, 00:17:46.041 "delay_cmd_submit": true, 00:17:46.041 "fast_io_fail_timeout_sec": 0, 00:17:46.041 "generate_uuids": false, 00:17:46.041 "high_priority_weight": 0, 00:17:46.041 "io_path_stat": false, 00:17:46.041 "io_queue_requests": 0, 00:17:46.041 "keep_alive_timeout_ms": 10000, 00:17:46.041 "low_priority_weight": 0, 00:17:46.041 "medium_priority_weight": 0, 00:17:46.041 "nvme_adminq_poll_period_us": 10000, 00:17:46.041 "nvme_ioq_poll_period_us": 0, 00:17:46.041 "reconnect_delay_sec": 0, 00:17:46.041 "timeout_admin_us": 0, 00:17:46.041 "timeout_us": 0, 00:17:46.041 "transport_ack_timeout": 0, 00:17:46.041 "transport_retry_count": 4, 00:17:46.041 "transport_tos": 0 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_nvme_set_hotplug", 00:17:46.041 "params": { 00:17:46.041 "enable": false, 00:17:46.041 "period_us": 100000 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_malloc_create", 00:17:46.041 "params": { 00:17:46.041 "block_size": 4096, 00:17:46.041 "name": "malloc0", 00:17:46.041 "num_blocks": 8192, 00:17:46.041 "optimal_io_boundary": 0, 00:17:46.041 "physical_block_size": 4096, 00:17:46.041 "uuid": "05592178-8fc9-4add-9bba-4e776fed3d57" 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "bdev_wait_for_examine" 00:17:46.041 } 00:17:46.041 ] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "nbd", 00:17:46.041 "config": [] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "sch 10:04:44 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:46.041 eduler", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "framework_set_scheduler", 00:17:46.041 "params": { 00:17:46.041 "name": "static" 00:17:46.041 } 00:17:46.041 
} 00:17:46.041 ] 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "subsystem": "nvmf", 00:17:46.041 "config": [ 00:17:46.041 { 00:17:46.041 "method": "nvmf_set_config", 00:17:46.041 "params": { 00:17:46.041 "admin_cmd_passthru": { 00:17:46.041 "identify_ctrlr": false 00:17:46.041 }, 00:17:46.041 "discovery_filter": "match_any" 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "nvmf_set_max_subsystems", 00:17:46.041 "params": { 00:17:46.041 "max_subsystems": 1024 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "nvmf_set_crdt", 00:17:46.041 "params": { 00:17:46.041 "crdt1": 0, 00:17:46.041 "crdt2": 0, 00:17:46.041 "crdt3": 0 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "nvmf_create_transport", 00:17:46.041 "params": { 00:17:46.041 "abort_timeout_sec": 1, 00:17:46.041 "buf_cache_size": 4294967295, 00:17:46.041 "c2h_success": false, 00:17:46.041 "dif_insert_or_strip": false, 00:17:46.041 "in_capsule_data_size": 4096, 00:17:46.041 "io_unit_size": 131072, 00:17:46.041 "max_aq_depth": 128, 00:17:46.041 "max_io_qpairs_per_ctrlr": 127, 00:17:46.041 "max_io_size": 131072, 00:17:46.041 "max_queue_depth": 128, 00:17:46.041 "num_shared_buffers": 511, 00:17:46.041 "sock_priority": 0, 00:17:46.041 "trtype": "TCP", 00:17:46.041 "zcopy": false 00:17:46.041 } 00:17:46.041 }, 00:17:46.041 { 00:17:46.041 "method": "nvmf_create_subsystem", 00:17:46.041 "params": { 00:17:46.041 "allow_any_host": false, 00:17:46.042 "ana_reporting": false, 00:17:46.042 "max_cntlid": 65519, 00:17:46.042 "max_namespaces": 10, 00:17:46.042 "min_cntlid": 1, 00:17:46.042 "model_number": "SPDK bdev Controller", 00:17:46.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.042 "serial_number": "SPDK00000000000001" 00:17:46.042 } 00:17:46.042 }, 00:17:46.042 { 00:17:46.042 "method": "nvmf_subsystem_add_host", 00:17:46.042 "params": { 00:17:46.042 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.042 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:46.042 } 00:17:46.042 }, 00:17:46.042 { 00:17:46.042 "method": "nvmf_subsystem_add_ns", 00:17:46.042 "params": { 00:17:46.042 "namespace": { 00:17:46.042 "bdev_name": "malloc0", 00:17:46.042 "nguid": "055921788FC94ADD9BBA4E776FED3D57", 00:17:46.042 "nsid": 1, 00:17:46.042 "uuid": "05592178-8fc9-4add-9bba-4e776fed3d57" 00:17:46.042 }, 00:17:46.042 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:46.042 } 00:17:46.042 }, 00:17:46.042 { 00:17:46.042 "method": "nvmf_subsystem_add_listener", 00:17:46.042 "params": { 00:17:46.042 "listen_address": { 00:17:46.042 "adrfam": "IPv4", 00:17:46.042 "traddr": "10.0.0.2", 00:17:46.042 "trsvcid": "4420", 00:17:46.042 "trtype": "TCP" 00:17:46.042 }, 00:17:46.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.042 "secure_channel": true 00:17:46.042 } 00:17:46.042 } 00:17:46.042 ] 00:17:46.042 } 00:17:46.042 ] 00:17:46.042 }' 00:17:46.042 10:04:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:46.042 10:04:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.042 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:17:46.042 10:04:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:46.042 10:04:44 -- nvmf/common.sh@469 -- # nvmfpid=89601 00:17:46.042 10:04:44 -- nvmf/common.sh@470 -- # waitforlisten 89601 00:17:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
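The JSON blob echoed above is the complete target-side configuration for this TLS run: an ssl socket implementation alongside posix, a 32 MiB malloc namespace, and an nvmf subsystem whose allowed host carries a PSK file and whose TCP listener is marked secure_channel. As a rough illustration only (the test feeds the JSON straight into nvmf_tgt rather than issuing RPCs), roughly the same state could be assembled interactively with scripts/rpc.py; the exact flag spellings, in particular --psk and --secure-channel, are assumptions and should be checked against rpc.py's help output for the build in use:

  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096            # 8192 blocks x 4096 B = 32 MiB
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel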
00:17:46.042 10:04:44 -- common/autotest_common.sh@829 -- # '[' -z 89601 ']' 00:17:46.042 10:04:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.042 10:04:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.042 10:04:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.042 10:04:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.042 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:17:46.042 [2024-12-16 10:04:44.594279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:46.042 [2024-12-16 10:04:44.594624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.302 [2024-12-16 10:04:44.734478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.302 [2024-12-16 10:04:44.792048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.302 [2024-12-16 10:04:44.792410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.302 [2024-12-16 10:04:44.792545] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.302 [2024-12-16 10:04:44.792610] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.302 [2024-12-16 10:04:44.792690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.560 [2024-12-16 10:04:45.002970] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.560 [2024-12-16 10:04:45.034929] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.560 [2024-12-16 10:04:45.035141] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.126 10:04:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.126 10:04:45 -- common/autotest_common.sh@862 -- # return 0 00:17:47.126 10:04:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:47.126 10:04:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.126 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:17:47.126 10:04:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
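Both applications in this run take their entire configuration over an anonymous file descriptor rather than a file on disk: nvmf_tgt is started with -c /dev/fd/62 and bdevperf (below) with -c /dev/fd/63 while the shell echoes the JSON into that descriptor. A minimal sketch of the pattern, using a deliberately tiny config that is assumed sufficient just to start the app:

  # Feed a JSON config to nvmf_tgt via process substitution instead of a config file.
  config='{ "subsystems": [ { "subsystem": "scheduler",
            "config": [ { "method": "framework_set_scheduler", "params": { "name": "static" } } ] } ] }'
  ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$config") &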
00:17:47.126 10:04:45 -- target/tls.sh@216 -- # bdevperf_pid=89641 00:17:47.126 10:04:45 -- target/tls.sh@217 -- # waitforlisten 89641 /var/tmp/bdevperf.sock 00:17:47.126 10:04:45 -- common/autotest_common.sh@829 -- # '[' -z 89641 ']' 00:17:47.126 10:04:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.126 10:04:45 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:47.126 10:04:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.126 10:04:45 -- target/tls.sh@213 -- # echo '{ 00:17:47.126 "subsystems": [ 00:17:47.126 { 00:17:47.126 "subsystem": "iobuf", 00:17:47.126 "config": [ 00:17:47.126 { 00:17:47.126 "method": "iobuf_set_options", 00:17:47.126 "params": { 00:17:47.126 "large_bufsize": 135168, 00:17:47.127 "large_pool_count": 1024, 00:17:47.127 "small_bufsize": 8192, 00:17:47.127 "small_pool_count": 8192 00:17:47.127 } 00:17:47.127 } 00:17:47.127 ] 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "subsystem": "sock", 00:17:47.127 "config": [ 00:17:47.127 { 00:17:47.127 "method": "sock_impl_set_options", 00:17:47.127 "params": { 00:17:47.127 "enable_ktls": false, 00:17:47.127 "enable_placement_id": 0, 00:17:47.127 "enable_quickack": false, 00:17:47.127 "enable_recv_pipe": true, 00:17:47.127 "enable_zerocopy_send_client": false, 00:17:47.127 "enable_zerocopy_send_server": true, 00:17:47.127 "impl_name": "posix", 00:17:47.127 "recv_buf_size": 2097152, 00:17:47.127 "send_buf_size": 2097152, 00:17:47.127 "tls_version": 0, 00:17:47.127 "zerocopy_threshold": 0 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "sock_impl_set_options", 00:17:47.127 "params": { 00:17:47.127 "enable_ktls": false, 00:17:47.127 "enable_placement_id": 0, 00:17:47.127 "enable_quickack": false, 00:17:47.127 "enable_recv_pipe": true, 00:17:47.127 "enable_zerocopy_send_client": false, 00:17:47.127 "enable_zerocopy_send_server": true, 00:17:47.127 "impl_name": "ssl", 00:17:47.127 "recv_buf_size": 4096, 00:17:47.127 "send_buf_size": 4096, 00:17:47.127 "tls_version": 0, 00:17:47.127 "zerocopy_threshold": 0 00:17:47.127 } 00:17:47.127 } 00:17:47.127 ] 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "subsystem": "vmd", 00:17:47.127 "config": [] 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "subsystem": "accel", 00:17:47.127 "config": [ 00:17:47.127 { 00:17:47.127 "method": "accel_set_options", 00:17:47.127 "params": { 00:17:47.127 "buf_count": 2048, 00:17:47.127 "large_cache_size": 16, 00:17:47.127 "sequence_count": 2048, 00:17:47.127 "small_cache_size": 128, 00:17:47.127 "task_count": 2048 00:17:47.127 } 00:17:47.127 } 00:17:47.127 ] 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "subsystem": "bdev", 00:17:47.127 "config": [ 00:17:47.127 { 00:17:47.127 "method": "bdev_set_options", 00:17:47.127 "params": { 00:17:47.127 "bdev_auto_examine": true, 00:17:47.127 "bdev_io_cache_size": 256, 00:17:47.127 "bdev_io_pool_size": 65535, 00:17:47.127 "iobuf_large_cache_size": 16, 00:17:47.127 "iobuf_small_cache_size": 128 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_raid_set_options", 00:17:47.127 "params": { 00:17:47.127 "process_window_size_kb": 1024 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_iscsi_set_options", 00:17:47.127 "params": { 00:17:47.127 "timeout_sec": 30 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_nvme_set_options", 00:17:47.127 "params": { 00:17:47.127 
"action_on_timeout": "none", 00:17:47.127 "allow_accel_sequence": false, 00:17:47.127 "arbitration_burst": 0, 00:17:47.127 "bdev_retry_count": 3, 00:17:47.127 "ctrlr_loss_timeout_sec": 0, 00:17:47.127 "delay_cmd_submit": true, 00:17:47.127 "fast_io_fail_timeout_sec": 0, 00:17:47.127 "generate_uuids": false, 00:17:47.127 "high_priority_weight": 0, 00:17:47.127 "io_path_stat": false, 00:17:47.127 "io_queue_requests": 512, 00:17:47.127 "keep_alive_timeout_ms": 10000, 00:17:47.127 "low_priority_weight": 0, 00:17:47.127 "medium_priority_weight": 0, 00:17:47.127 "nvme_adminq_poll_period_us": 10000, 00:17:47.127 "nvme_ioq_poll_period_us": 0, 00:17:47.127 "reconnect_delay_sec": 0, 00:17:47.127 "timeout_admin_us": 0, 00:17:47.127 "timeout_us": 0, 00:17:47.127 "transport_ack_timeout": 0, 00:17:47.127 "transport_retry_count": 4, 00:17:47.127 "transport_tos": 0 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_nvme_attach_controller", 00:17:47.127 "params": { 00:17:47.127 "adrfam": "IPv4", 00:17:47.127 "ctrlr_loss_timeout_sec": 0, 00:17:47.127 "ddgst": false, 00:17:47.127 "fast_io_fail_timeout_sec": 0, 00:17:47.127 "hdgst": false, 00:17:47.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.127 "name": "TLSTEST", 00:17:47.127 "prchk_guard": false, 00:17:47.127 "prchk_reftag": false, 00:17:47.127 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:47.127 "reconnect_delay_sec": 0, 00:17:47.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.127 "traddr": "10.0.0.2", 00:17:47.127 "trsvcid": "4420", 00:17:47.127 "trtype": "TCP" 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_nvme_set_hotplug", 00:17:47.127 "params": { 00:17:47.127 "enable": false, 00:17:47.127 "period_us": 100000 00:17:47.127 } 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "method": "bdev_wait_for_examine" 00:17:47.127 } 00:17:47.127 ] 00:17:47.127 }, 00:17:47.127 { 00:17:47.127 "subsystem": "nbd", 00:17:47.127 "config": [] 00:17:47.127 } 00:17:47.127 ] 00:17:47.127 }' 00:17:47.127 10:04:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.127 10:04:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.127 10:04:45 -- common/autotest_common.sh@10 -- # set +x 00:17:47.127 [2024-12-16 10:04:45.553568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:47.127 [2024-12-16 10:04:45.553676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89641 ] 00:17:47.127 [2024-12-16 10:04:45.693392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.386 [2024-12-16 10:04:45.768392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.386 [2024-12-16 10:04:45.924797] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.952 10:04:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.952 10:04:46 -- common/autotest_common.sh@862 -- # return 0 00:17:47.952 10:04:46 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:48.211 Running I/O for 10 seconds... 
00:17:58.184 00:17:58.184 Latency(us) 00:17:58.184 [2024-12-16T10:04:56.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.184 [2024-12-16T10:04:56.809Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.184 Verification LBA range: start 0x0 length 0x2000 00:17:58.184 TLSTESTn1 : 10.01 7266.22 28.38 0.00 0.00 17592.76 2278.87 20375.74 00:17:58.184 [2024-12-16T10:04:56.809Z] =================================================================================================================== 00:17:58.184 [2024-12-16T10:04:56.809Z] Total : 7266.22 28.38 0.00 0.00 17592.76 2278.87 20375.74 00:17:58.184 0 00:17:58.184 10:04:56 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.184 10:04:56 -- target/tls.sh@223 -- # killprocess 89641 00:17:58.184 10:04:56 -- common/autotest_common.sh@936 -- # '[' -z 89641 ']' 00:17:58.184 10:04:56 -- common/autotest_common.sh@940 -- # kill -0 89641 00:17:58.184 10:04:56 -- common/autotest_common.sh@941 -- # uname 00:17:58.184 10:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.184 10:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89641 00:17:58.184 10:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:58.184 10:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:58.184 killing process with pid 89641 00:17:58.184 10:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89641' 00:17:58.184 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.184 00:17:58.184 Latency(us) 00:17:58.184 [2024-12-16T10:04:56.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.184 [2024-12-16T10:04:56.809Z] =================================================================================================================== 00:17:58.184 [2024-12-16T10:04:56.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.184 10:04:56 -- common/autotest_common.sh@955 -- # kill 89641 00:17:58.184 10:04:56 -- common/autotest_common.sh@960 -- # wait 89641 00:17:58.443 10:04:56 -- target/tls.sh@224 -- # killprocess 89601 00:17:58.443 10:04:56 -- common/autotest_common.sh@936 -- # '[' -z 89601 ']' 00:17:58.443 10:04:56 -- common/autotest_common.sh@940 -- # kill -0 89601 00:17:58.443 10:04:56 -- common/autotest_common.sh@941 -- # uname 00:17:58.443 10:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.443 10:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89601 00:17:58.443 10:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:58.443 10:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:58.443 killing process with pid 89601 00:17:58.443 10:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89601' 00:17:58.443 10:04:56 -- common/autotest_common.sh@955 -- # kill 89601 00:17:58.443 10:04:56 -- common/autotest_common.sh@960 -- # wait 89601 00:17:58.702 10:04:57 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:58.702 10:04:57 -- target/tls.sh@227 -- # cleanup 00:17:58.702 10:04:57 -- target/tls.sh@15 -- # process_shm --id 0 00:17:58.702 10:04:57 -- common/autotest_common.sh@806 -- # type=--id 00:17:58.702 10:04:57 -- common/autotest_common.sh@807 -- # id=0 00:17:58.702 10:04:57 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:58.702 10:04:57 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:17:58.702 10:04:57 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:58.702 10:04:57 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:58.702 10:04:57 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:58.702 10:04:57 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:58.702 nvmf_trace.0 00:17:58.702 Process with pid 89641 is not found 00:17:58.702 10:04:57 -- common/autotest_common.sh@821 -- # return 0 00:17:58.702 10:04:57 -- target/tls.sh@16 -- # killprocess 89641 00:17:58.702 10:04:57 -- common/autotest_common.sh@936 -- # '[' -z 89641 ']' 00:17:58.702 10:04:57 -- common/autotest_common.sh@940 -- # kill -0 89641 00:17:58.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89641) - No such process 00:17:58.702 10:04:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89641 is not found' 00:17:58.702 10:04:57 -- target/tls.sh@17 -- # nvmftestfini 00:17:58.702 10:04:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.702 10:04:57 -- nvmf/common.sh@116 -- # sync 00:17:58.702 10:04:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.702 10:04:57 -- nvmf/common.sh@119 -- # set +e 00:17:58.702 10:04:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.702 10:04:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.702 rmmod nvme_tcp 00:17:58.702 rmmod nvme_fabrics 00:17:58.702 rmmod nvme_keyring 00:17:58.702 10:04:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.702 Process with pid 89601 is not found 00:17:58.702 10:04:57 -- nvmf/common.sh@123 -- # set -e 00:17:58.702 10:04:57 -- nvmf/common.sh@124 -- # return 0 00:17:58.702 10:04:57 -- nvmf/common.sh@477 -- # '[' -n 89601 ']' 00:17:58.702 10:04:57 -- nvmf/common.sh@478 -- # killprocess 89601 00:17:58.702 10:04:57 -- common/autotest_common.sh@936 -- # '[' -z 89601 ']' 00:17:58.702 10:04:57 -- common/autotest_common.sh@940 -- # kill -0 89601 00:17:58.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89601) - No such process 00:17:58.702 10:04:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89601 is not found' 00:17:58.702 10:04:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.702 10:04:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:58.702 10:04:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:58.702 10:04:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.702 10:04:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:58.702 10:04:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.702 10:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.702 10:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.961 10:04:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:58.961 10:04:57 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:58.961 00:17:58.961 real 1m10.826s 00:17:58.961 user 1m48.973s 00:17:58.961 sys 0m24.724s 00:17:58.961 10:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:58.961 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:17:58.961 ************************************ 00:17:58.961 END TEST nvmf_tls 00:17:58.961 
************************************ 00:17:58.961 10:04:57 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:58.961 10:04:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.961 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:17:58.961 ************************************ 00:17:58.961 START TEST nvmf_fips 00:17:58.961 ************************************ 00:17:58.961 10:04:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:58.961 * Looking for test storage... 00:17:58.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:58.961 10:04:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:58.961 10:04:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:58.961 10:04:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:58.961 10:04:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:58.961 10:04:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:58.961 10:04:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:58.961 10:04:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:58.961 10:04:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:58.961 10:04:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.961 10:04:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:58.961 10:04:57 -- scripts/common.sh@337 -- # local 'op=<' 00:17:58.961 10:04:57 -- scripts/common.sh@339 -- # ver1_l=2 00:17:58.961 10:04:57 -- scripts/common.sh@340 -- # ver2_l=1 00:17:58.961 10:04:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:58.961 10:04:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:58.961 10:04:57 -- scripts/common.sh@344 -- # : 1 00:17:58.961 10:04:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:58.961 10:04:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.961 10:04:57 -- scripts/common.sh@364 -- # decimal 1 00:17:58.961 10:04:57 -- scripts/common.sh@352 -- # local d=1 00:17:58.961 10:04:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.961 10:04:57 -- scripts/common.sh@354 -- # echo 1 00:17:58.961 10:04:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:58.961 10:04:57 -- scripts/common.sh@365 -- # decimal 2 00:17:58.961 10:04:57 -- scripts/common.sh@352 -- # local d=2 00:17:58.961 10:04:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.961 10:04:57 -- scripts/common.sh@354 -- # echo 2 00:17:58.961 10:04:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:58.961 10:04:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:58.961 10:04:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:58.961 10:04:57 -- scripts/common.sh@367 -- # return 0 00:17:58.961 10:04:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.961 --rc genhtml_branch_coverage=1 00:17:58.961 --rc genhtml_function_coverage=1 00:17:58.961 --rc genhtml_legend=1 00:17:58.961 --rc geninfo_all_blocks=1 00:17:58.961 --rc geninfo_unexecuted_blocks=1 00:17:58.961 00:17:58.961 ' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.961 --rc genhtml_branch_coverage=1 00:17:58.961 --rc genhtml_function_coverage=1 00:17:58.961 --rc genhtml_legend=1 00:17:58.961 --rc geninfo_all_blocks=1 00:17:58.961 --rc geninfo_unexecuted_blocks=1 00:17:58.961 00:17:58.961 ' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.961 --rc genhtml_branch_coverage=1 00:17:58.961 --rc genhtml_function_coverage=1 00:17:58.961 --rc genhtml_legend=1 00:17:58.961 --rc geninfo_all_blocks=1 00:17:58.961 --rc geninfo_unexecuted_blocks=1 00:17:58.961 00:17:58.961 ' 00:17:58.961 10:04:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:58.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.961 --rc genhtml_branch_coverage=1 00:17:58.961 --rc genhtml_function_coverage=1 00:17:58.961 --rc genhtml_legend=1 00:17:58.961 --rc geninfo_all_blocks=1 00:17:58.961 --rc geninfo_unexecuted_blocks=1 00:17:58.961 00:17:58.961 ' 00:17:58.961 10:04:57 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.961 10:04:57 -- nvmf/common.sh@7 -- # uname -s 00:17:58.961 10:04:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.961 10:04:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.961 10:04:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.961 10:04:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.961 10:04:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.961 10:04:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.961 10:04:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.961 10:04:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.961 10:04:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.961 10:04:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.961 10:04:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:17:58.961 
10:04:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:17:58.961 10:04:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.961 10:04:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.961 10:04:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.961 10:04:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.961 10:04:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.961 10:04:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.962 10:04:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.962 10:04:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.962 10:04:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.962 10:04:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.962 10:04:57 -- paths/export.sh@5 -- # export PATH 00:17:58.962 10:04:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.962 10:04:57 -- nvmf/common.sh@46 -- # : 0 00:17:58.962 10:04:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:58.962 10:04:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:58.962 10:04:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:58.962 10:04:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.962 10:04:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.962 10:04:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:58.962 10:04:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:58.962 10:04:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:58.962 10:04:57 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.962 10:04:57 -- fips/fips.sh@89 -- # check_openssl_version 00:17:58.962 10:04:57 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:58.962 10:04:57 -- fips/fips.sh@85 -- # openssl version 00:17:58.962 10:04:57 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:59.221 10:04:57 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:59.221 10:04:57 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:59.221 10:04:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:59.221 10:04:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:59.221 10:04:57 -- scripts/common.sh@335 -- # IFS=.-: 00:17:59.221 10:04:57 -- scripts/common.sh@335 -- # read -ra ver1 00:17:59.221 10:04:57 -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.221 10:04:57 -- scripts/common.sh@336 -- # read -ra ver2 00:17:59.221 10:04:57 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:59.221 10:04:57 -- scripts/common.sh@339 -- # ver1_l=3 00:17:59.221 10:04:57 -- scripts/common.sh@340 -- # ver2_l=3 00:17:59.221 10:04:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:59.221 10:04:57 -- scripts/common.sh@343 -- # case "$op" in 00:17:59.221 10:04:57 -- scripts/common.sh@347 -- # : 1 00:17:59.221 10:04:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:59.221 10:04:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.221 10:04:57 -- scripts/common.sh@364 -- # decimal 3 00:17:59.221 10:04:57 -- scripts/common.sh@352 -- # local d=3 00:17:59.221 10:04:57 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:59.221 10:04:57 -- scripts/common.sh@354 -- # echo 3 00:17:59.221 10:04:57 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:59.221 10:04:57 -- scripts/common.sh@365 -- # decimal 3 00:17:59.221 10:04:57 -- scripts/common.sh@352 -- # local d=3 00:17:59.221 10:04:57 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:59.221 10:04:57 -- scripts/common.sh@354 -- # echo 3 00:17:59.221 10:04:57 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:59.221 10:04:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.221 10:04:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:59.221 10:04:57 -- scripts/common.sh@363 -- # (( v++ )) 00:17:59.221 10:04:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.221 10:04:57 -- scripts/common.sh@364 -- # decimal 1 00:17:59.221 10:04:57 -- scripts/common.sh@352 -- # local d=1 00:17:59.221 10:04:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.221 10:04:57 -- scripts/common.sh@354 -- # echo 1 00:17:59.221 10:04:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:59.221 10:04:57 -- scripts/common.sh@365 -- # decimal 0 00:17:59.221 10:04:57 -- scripts/common.sh@352 -- # local d=0 00:17:59.221 10:04:57 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:59.221 10:04:57 -- scripts/common.sh@354 -- # echo 0 00:17:59.221 10:04:57 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:59.221 10:04:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.221 10:04:57 -- scripts/common.sh@366 -- # return 0 00:17:59.221 10:04:57 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:59.221 10:04:57 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:59.221 10:04:57 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:59.221 10:04:57 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:59.221 10:04:57 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:59.221 10:04:57 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:59.221 10:04:57 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:59.221 10:04:57 -- fips/fips.sh@113 -- # build_openssl_config 00:17:59.221 10:04:57 -- fips/fips.sh@37 -- # cat 00:17:59.221 10:04:57 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:59.221 10:04:57 -- fips/fips.sh@58 -- # cat - 00:17:59.221 10:04:57 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:59.221 10:04:57 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:59.221 10:04:57 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:59.221 10:04:57 -- fips/fips.sh@116 -- # grep name 00:17:59.221 10:04:57 -- fips/fips.sh@116 -- # openssl list -providers 00:17:59.221 10:04:57 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:59.221 10:04:57 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:59.221 10:04:57 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:59.221 10:04:57 -- fips/fips.sh@127 -- # : 00:17:59.221 10:04:57 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:59.221 10:04:57 -- common/autotest_common.sh@650 -- # local es=0 00:17:59.221 10:04:57 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:59.221 10:04:57 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:59.221 10:04:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.221 10:04:57 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:59.221 10:04:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.221 10:04:57 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:59.221 10:04:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.221 10:04:57 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:59.221 10:04:57 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:59.221 10:04:57 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:59.221 Error setting digest 00:17:59.221 40923B05607F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:59.221 40923B05607F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:59.221 10:04:57 -- common/autotest_common.sh@653 -- # es=1 00:17:59.221 10:04:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.221 10:04:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.221 10:04:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.221 10:04:57 -- fips/fips.sh@130 -- # nvmftestinit 00:17:59.221 10:04:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:59.221 10:04:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.221 10:04:57 -- nvmf/common.sh@436 -- # prepare_net_devs 
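The block above is the FIPS gate in fips.sh: it requires OpenSSL 3.0.0 or newer, expects a fips provider next to the base provider in the provider list, and then proves enforcement by requiring a non-approved digest to fail, which is exactly the "Error setting digest" seen for MD5. A condensed sketch of that check in plain bash (not the test's own helper functions):

  openssl list -providers | grep -i name        # expect both a base and a fips provider
  if echo -n probe | openssl md5 >/dev/null 2>&1; then
      echo 'MD5 still works: FIPS mode is not being enforced' >&2
  else
      echo 'MD5 rejected: FIPS enforcement is active'
  fi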
00:17:59.221 10:04:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:59.221 10:04:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:59.221 10:04:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.221 10:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.221 10:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.221 10:04:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:59.221 10:04:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:59.221 10:04:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:59.221 10:04:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:59.221 10:04:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:59.221 10:04:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:59.221 10:04:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.221 10:04:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.221 10:04:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:59.221 10:04:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:59.221 10:04:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.221 10:04:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.221 10:04:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.221 10:04:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.221 10:04:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.221 10:04:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.221 10:04:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.221 10:04:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.221 10:04:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:59.221 10:04:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:59.221 Cannot find device "nvmf_tgt_br" 00:17:59.221 10:04:57 -- nvmf/common.sh@154 -- # true 00:17:59.221 10:04:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.221 Cannot find device "nvmf_tgt_br2" 00:17:59.221 10:04:57 -- nvmf/common.sh@155 -- # true 00:17:59.221 10:04:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:59.221 10:04:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:59.221 Cannot find device "nvmf_tgt_br" 00:17:59.221 10:04:57 -- nvmf/common.sh@157 -- # true 00:17:59.221 10:04:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:59.221 Cannot find device "nvmf_tgt_br2" 00:17:59.221 10:04:57 -- nvmf/common.sh@158 -- # true 00:17:59.221 10:04:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:59.481 10:04:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:59.481 10:04:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.481 10:04:57 -- nvmf/common.sh@161 -- # true 00:17:59.481 10:04:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.481 10:04:57 -- nvmf/common.sh@162 -- # true 00:17:59.481 10:04:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.481 10:04:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.481 10:04:57 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.481 10:04:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.481 10:04:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.481 10:04:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.481 10:04:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.481 10:04:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.481 10:04:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.481 10:04:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:59.481 10:04:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:59.481 10:04:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:59.481 10:04:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:59.481 10:04:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.481 10:04:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.481 10:04:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.481 10:04:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:59.481 10:04:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:59.481 10:04:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.481 10:04:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.481 10:04:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.481 10:04:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.481 10:04:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.481 10:04:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:59.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:59.481 00:17:59.481 --- 10.0.0.2 ping statistics --- 00:17:59.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:59.481 10:04:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:59.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:59.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:59.481 00:17:59.481 --- 10.0.0.3 ping statistics --- 00:17:59.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.481 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:59.481 10:04:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:59.481 00:17:59.481 --- 10.0.0.1 ping statistics --- 00:17:59.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.481 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:59.481 10:04:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.481 10:04:58 -- nvmf/common.sh@421 -- # return 0 00:17:59.481 10:04:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.481 10:04:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.481 10:04:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.481 10:04:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.481 10:04:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.481 10:04:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.481 10:04:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.481 10:04:58 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:59.481 10:04:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.481 10:04:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.481 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:17:59.481 10:04:58 -- nvmf/common.sh@469 -- # nvmfpid=90005 00:17:59.481 10:04:58 -- nvmf/common.sh@470 -- # waitforlisten 90005 00:17:59.481 10:04:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.481 10:04:58 -- common/autotest_common.sh@829 -- # '[' -z 90005 ']' 00:17:59.481 10:04:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.481 10:04:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.481 10:04:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.481 10:04:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.481 10:04:58 -- common/autotest_common.sh@10 -- # set +x 00:17:59.740 [2024-12-16 10:04:58.167390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:59.740 [2024-12-16 10:04:58.167473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.740 [2024-12-16 10:04:58.299340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.998 [2024-12-16 10:04:58.375281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.998 [2024-12-16 10:04:58.375439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.998 [2024-12-16 10:04:58.375452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.998 [2024-12-16 10:04:58.375462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
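The addressing the pings just verified comes from the veth/namespace topology built by nvmf_veth_init: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the two sides meet on the nvmf_br bridge with TCP/4420 opened in iptables. A condensed sketch of that setup, using the same interface names as the log (the link-up commands are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator, root namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target, inside the namespace
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT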
00:17:59.998 [2024-12-16 10:04:58.375486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.565 10:04:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.565 10:04:59 -- common/autotest_common.sh@862 -- # return 0 00:18:00.565 10:04:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.565 10:04:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.565 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:18:00.824 10:04:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.824 10:04:59 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:00.824 10:04:59 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:00.824 10:04:59 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.824 10:04:59 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:00.824 10:04:59 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.824 10:04:59 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.824 10:04:59 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.824 10:04:59 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.082 [2024-12-16 10:04:59.463954] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.082 [2024-12-16 10:04:59.479926] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.082 [2024-12-16 10:04:59.480077] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.082 malloc0 00:18:01.082 10:04:59 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.082 10:04:59 -- fips/fips.sh@147 -- # bdevperf_pid=90061 00:18:01.082 10:04:59 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.082 10:04:59 -- fips/fips.sh@148 -- # waitforlisten 90061 /var/tmp/bdevperf.sock 00:18:01.082 10:04:59 -- common/autotest_common.sh@829 -- # '[' -z 90061 ']' 00:18:01.082 10:04:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.082 10:04:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.082 10:04:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.082 10:04:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.082 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:18:01.082 [2024-12-16 10:04:59.600588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
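The PSK used for the FIPS run is an NVMe TLS interchange string written verbatim to a mode-0600 file and then handed to both nvmf_subsystem_add_host on the target and bdev_nvme_attach_controller on the initiator. A sketch of the preparation step, with the key value copied from the log (the trailing colon is part of the interchange format):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"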
00:18:01.082 [2024-12-16 10:04:59.600648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90061 ] 00:18:01.340 [2024-12-16 10:04:59.731118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.340 [2024-12-16 10:04:59.790607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.275 10:05:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.275 10:05:00 -- common/autotest_common.sh@862 -- # return 0 00:18:02.275 10:05:00 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:02.275 [2024-12-16 10:05:00.838623] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.533 TLSTESTn1 00:18:02.533 10:05:00 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.533 Running I/O for 10 seconds... 00:18:12.505 00:18:12.505 Latency(us) 00:18:12.505 [2024-12-16T10:05:11.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.505 [2024-12-16T10:05:11.130Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:12.505 Verification LBA range: start 0x0 length 0x2000 00:18:12.505 TLSTESTn1 : 10.01 6768.93 26.44 0.00 0.00 18879.25 3172.54 19541.64 00:18:12.505 [2024-12-16T10:05:11.130Z] =================================================================================================================== 00:18:12.505 [2024-12-16T10:05:11.130Z] Total : 6768.93 26.44 0.00 0.00 18879.25 3172.54 19541.64 00:18:12.505 0 00:18:12.505 10:05:11 -- fips/fips.sh@1 -- # cleanup 00:18:12.505 10:05:11 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:12.505 10:05:11 -- common/autotest_common.sh@806 -- # type=--id 00:18:12.505 10:05:11 -- common/autotest_common.sh@807 -- # id=0 00:18:12.505 10:05:11 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:12.505 10:05:11 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.505 10:05:11 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:12.505 10:05:11 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:12.505 10:05:11 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:12.505 10:05:11 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.505 nvmf_trace.0 00:18:12.763 10:05:11 -- common/autotest_common.sh@821 -- # return 0 00:18:12.763 10:05:11 -- fips/fips.sh@16 -- # killprocess 90061 00:18:12.763 10:05:11 -- common/autotest_common.sh@936 -- # '[' -z 90061 ']' 00:18:12.763 10:05:11 -- common/autotest_common.sh@940 -- # kill -0 90061 00:18:12.763 10:05:11 -- common/autotest_common.sh@941 -- # uname 00:18:12.763 10:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.763 10:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90061 00:18:12.763 10:05:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:12.763 10:05:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:12.763 
killing process with pid 90061 00:18:12.763 10:05:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90061' 00:18:12.763 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.763 00:18:12.763 Latency(us) 00:18:12.763 [2024-12-16T10:05:11.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.763 [2024-12-16T10:05:11.388Z] =================================================================================================================== 00:18:12.763 [2024-12-16T10:05:11.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.763 10:05:11 -- common/autotest_common.sh@955 -- # kill 90061 00:18:12.763 10:05:11 -- common/autotest_common.sh@960 -- # wait 90061 00:18:13.021 10:05:11 -- fips/fips.sh@17 -- # nvmftestfini 00:18:13.021 10:05:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.021 10:05:11 -- nvmf/common.sh@116 -- # sync 00:18:13.021 10:05:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.021 10:05:11 -- nvmf/common.sh@119 -- # set +e 00:18:13.021 10:05:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.021 10:05:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.021 rmmod nvme_tcp 00:18:13.021 rmmod nvme_fabrics 00:18:13.021 rmmod nvme_keyring 00:18:13.021 10:05:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.021 10:05:11 -- nvmf/common.sh@123 -- # set -e 00:18:13.021 10:05:11 -- nvmf/common.sh@124 -- # return 0 00:18:13.021 10:05:11 -- nvmf/common.sh@477 -- # '[' -n 90005 ']' 00:18:13.021 10:05:11 -- nvmf/common.sh@478 -- # killprocess 90005 00:18:13.021 10:05:11 -- common/autotest_common.sh@936 -- # '[' -z 90005 ']' 00:18:13.021 10:05:11 -- common/autotest_common.sh@940 -- # kill -0 90005 00:18:13.021 10:05:11 -- common/autotest_common.sh@941 -- # uname 00:18:13.021 10:05:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.021 10:05:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90005 00:18:13.021 10:05:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:13.021 10:05:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:13.021 killing process with pid 90005 00:18:13.021 10:05:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90005' 00:18:13.021 10:05:11 -- common/autotest_common.sh@955 -- # kill 90005 00:18:13.021 10:05:11 -- common/autotest_common.sh@960 -- # wait 90005 00:18:13.281 10:05:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.281 10:05:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.281 10:05:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:13.281 10:05:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.281 10:05:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.281 10:05:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.281 10:05:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.281 10:05:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.281 10:05:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:13.281 10:05:11 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:13.281 00:18:13.281 real 0m14.377s 00:18:13.281 user 0m19.412s 00:18:13.281 sys 0m5.858s 00:18:13.281 10:05:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:13.281 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:13.281 ************************************ 00:18:13.281 END TEST nvmf_fips 
00:18:13.281 ************************************ 00:18:13.281 10:05:11 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:13.281 10:05:11 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.281 10:05:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.281 10:05:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.281 10:05:11 -- common/autotest_common.sh@10 -- # set +x 00:18:13.281 ************************************ 00:18:13.281 START TEST nvmf_fuzz 00:18:13.281 ************************************ 00:18:13.281 10:05:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.281 * Looking for test storage... 00:18:13.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:13.281 10:05:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:13.282 10:05:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:13.282 10:05:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:13.541 10:05:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:13.541 10:05:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:13.541 10:05:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:13.541 10:05:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:13.541 10:05:11 -- scripts/common.sh@335 -- # IFS=.-: 00:18:13.541 10:05:11 -- scripts/common.sh@335 -- # read -ra ver1 00:18:13.541 10:05:11 -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.541 10:05:11 -- scripts/common.sh@336 -- # read -ra ver2 00:18:13.541 10:05:11 -- scripts/common.sh@337 -- # local 'op=<' 00:18:13.541 10:05:11 -- scripts/common.sh@339 -- # ver1_l=2 00:18:13.541 10:05:11 -- scripts/common.sh@340 -- # ver2_l=1 00:18:13.541 10:05:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:13.541 10:05:11 -- scripts/common.sh@343 -- # case "$op" in 00:18:13.541 10:05:11 -- scripts/common.sh@344 -- # : 1 00:18:13.541 10:05:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:13.541 10:05:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.541 10:05:11 -- scripts/common.sh@364 -- # decimal 1 00:18:13.541 10:05:11 -- scripts/common.sh@352 -- # local d=1 00:18:13.541 10:05:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.541 10:05:11 -- scripts/common.sh@354 -- # echo 1 00:18:13.541 10:05:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:13.541 10:05:11 -- scripts/common.sh@365 -- # decimal 2 00:18:13.541 10:05:11 -- scripts/common.sh@352 -- # local d=2 00:18:13.541 10:05:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.541 10:05:11 -- scripts/common.sh@354 -- # echo 2 00:18:13.541 10:05:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:13.541 10:05:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.541 10:05:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:13.541 10:05:11 -- scripts/common.sh@367 -- # return 0 00:18:13.541 10:05:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.541 10:05:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.541 --rc genhtml_branch_coverage=1 00:18:13.541 --rc genhtml_function_coverage=1 00:18:13.541 --rc genhtml_legend=1 00:18:13.541 --rc geninfo_all_blocks=1 00:18:13.541 --rc geninfo_unexecuted_blocks=1 00:18:13.541 00:18:13.541 ' 00:18:13.541 10:05:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.541 --rc genhtml_branch_coverage=1 00:18:13.541 --rc genhtml_function_coverage=1 00:18:13.541 --rc genhtml_legend=1 00:18:13.541 --rc geninfo_all_blocks=1 00:18:13.541 --rc geninfo_unexecuted_blocks=1 00:18:13.541 00:18:13.541 ' 00:18:13.541 10:05:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.541 --rc genhtml_branch_coverage=1 00:18:13.541 --rc genhtml_function_coverage=1 00:18:13.541 --rc genhtml_legend=1 00:18:13.541 --rc geninfo_all_blocks=1 00:18:13.541 --rc geninfo_unexecuted_blocks=1 00:18:13.541 00:18:13.541 ' 00:18:13.541 10:05:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:13.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.541 --rc genhtml_branch_coverage=1 00:18:13.541 --rc genhtml_function_coverage=1 00:18:13.541 --rc genhtml_legend=1 00:18:13.541 --rc geninfo_all_blocks=1 00:18:13.541 --rc geninfo_unexecuted_blocks=1 00:18:13.541 00:18:13.541 ' 00:18:13.541 10:05:11 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.541 10:05:11 -- nvmf/common.sh@7 -- # uname -s 00:18:13.541 10:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.541 10:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.542 10:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.542 10:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.542 10:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.542 10:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.542 10:05:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.542 10:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.542 10:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.542 10:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.542 10:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
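[Annotator's note] Each target test sources nvmf/common.sh, which generates a fresh host identity: nvme gen-hostnqn returns a UUID-based NQN, and the UUID portion is reused as the host ID that later nvme connect calls pass along. A minimal sketch of what is happening here (the exact parsing inside common.sh is not visible in the trace, so the parameter expansion is an assumption):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed: keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")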
00:18:13.542 10:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:18:13.542 10:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.542 10:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.542 10:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.542 10:05:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.542 10:05:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.542 10:05:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.542 10:05:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.542 10:05:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.542 10:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.542 10:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.542 10:05:12 -- paths/export.sh@5 -- # export PATH 00:18:13.542 10:05:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.542 10:05:12 -- nvmf/common.sh@46 -- # : 0 00:18:13.542 10:05:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.542 10:05:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.542 10:05:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.542 10:05:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.542 10:05:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.542 10:05:12 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:13.542 10:05:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.542 10:05:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.542 10:05:12 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:13.542 10:05:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.542 10:05:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.542 10:05:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.542 10:05:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.542 10:05:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.542 10:05:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.542 10:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.542 10:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.542 10:05:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:13.542 10:05:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:13.542 10:05:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:13.542 10:05:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:13.542 10:05:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:13.542 10:05:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:13.542 10:05:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.542 10:05:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.542 10:05:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.542 10:05:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:13.542 10:05:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.542 10:05:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.542 10:05:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.542 10:05:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.542 10:05:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.542 10:05:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.542 10:05:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.542 10:05:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.542 10:05:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:13.542 10:05:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:13.542 Cannot find device "nvmf_tgt_br" 00:18:13.542 10:05:12 -- nvmf/common.sh@154 -- # true 00:18:13.542 10:05:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.542 Cannot find device "nvmf_tgt_br2" 00:18:13.542 10:05:12 -- nvmf/common.sh@155 -- # true 00:18:13.542 10:05:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:13.542 10:05:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:13.542 Cannot find device "nvmf_tgt_br" 00:18:13.542 10:05:12 -- nvmf/common.sh@157 -- # true 00:18:13.542 10:05:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:13.542 Cannot find device "nvmf_tgt_br2" 00:18:13.542 10:05:12 -- nvmf/common.sh@158 -- # true 00:18:13.542 10:05:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:13.542 10:05:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:13.542 10:05:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.542 10:05:12 -- nvmf/common.sh@161 -- # true 00:18:13.542 10:05:12 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.542 10:05:12 -- nvmf/common.sh@162 -- # true 00:18:13.542 10:05:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.542 10:05:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.542 10:05:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.542 10:05:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.801 10:05:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.801 10:05:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.801 10:05:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.801 10:05:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.801 10:05:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:13.801 10:05:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:13.801 10:05:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:13.801 10:05:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:13.801 10:05:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:13.801 10:05:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.801 10:05:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.801 10:05:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.801 10:05:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:13.801 10:05:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:13.801 10:05:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.801 10:05:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.801 10:05:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.801 10:05:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.801 10:05:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.801 10:05:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:13.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:13.801 00:18:13.801 --- 10.0.0.2 ping statistics --- 00:18:13.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.801 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:13.801 10:05:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:13.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:13.801 00:18:13.801 --- 10.0.0.3 ping statistics --- 00:18:13.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.801 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:13.801 10:05:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:13.801 00:18:13.801 --- 10.0.0.1 ping statistics --- 00:18:13.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.801 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:13.801 10:05:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.801 10:05:12 -- nvmf/common.sh@421 -- # return 0 00:18:13.801 10:05:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:13.801 10:05:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.801 10:05:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:13.801 10:05:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:13.801 10:05:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.801 10:05:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:13.801 10:05:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:13.801 10:05:12 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90413 00:18:13.801 10:05:12 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:13.801 10:05:12 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:13.801 10:05:12 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90413 00:18:13.801 10:05:12 -- common/autotest_common.sh@829 -- # '[' -z 90413 ']' 00:18:13.801 10:05:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.801 10:05:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.801 10:05:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
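[Annotator's note] The nvmf_veth_init block traced above builds a small virtual topology so the initiator in the root namespace can reach a target running inside the nvmf_tgt_ns_spdk namespace: one veth pair per endpoint, all bridged together, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, verified by the three pings. Condensed from the trace (link bring-up and the failed cleanup probes at the start are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability check
  # the target is then started inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1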
00:18:13.802 10:05:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.802 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:18:14.737 10:05:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.737 10:05:13 -- common/autotest_common.sh@862 -- # return 0 00:18:14.737 10:05:13 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.737 10:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.737 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.737 10:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.737 10:05:13 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:14.737 10:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.737 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.996 Malloc0 00:18:14.996 10:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.996 10:05:13 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:14.996 10:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.996 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.996 10:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.996 10:05:13 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.996 10:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.996 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.996 10:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.996 10:05:13 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.996 10:05:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.996 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:18:14.996 10:05:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.996 10:05:13 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:14.996 10:05:13 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:15.255 Shutting down the fuzz application 00:18:15.255 10:05:13 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:15.514 Shutting down the fuzz application 00:18:15.514 10:05:14 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.514 10:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.514 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:18:15.514 10:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.514 10:05:14 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:15.514 10:05:14 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:15.514 10:05:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.514 10:05:14 -- nvmf/common.sh@116 -- # sync 00:18:15.514 10:05:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.514 10:05:14 -- nvmf/common.sh@119 -- # set +e 00:18:15.514 10:05:14 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.514 10:05:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.514 rmmod nvme_tcp 00:18:15.514 rmmod nvme_fabrics 00:18:15.514 rmmod nvme_keyring 00:18:15.514 10:05:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.514 10:05:14 -- nvmf/common.sh@123 -- # set -e 00:18:15.514 10:05:14 -- nvmf/common.sh@124 -- # return 0 00:18:15.514 10:05:14 -- nvmf/common.sh@477 -- # '[' -n 90413 ']' 00:18:15.514 10:05:14 -- nvmf/common.sh@478 -- # killprocess 90413 00:18:15.514 10:05:14 -- common/autotest_common.sh@936 -- # '[' -z 90413 ']' 00:18:15.514 10:05:14 -- common/autotest_common.sh@940 -- # kill -0 90413 00:18:15.773 10:05:14 -- common/autotest_common.sh@941 -- # uname 00:18:15.773 10:05:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.773 10:05:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90413 00:18:15.773 10:05:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:15.773 10:05:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:15.773 10:05:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90413' 00:18:15.773 killing process with pid 90413 00:18:15.773 10:05:14 -- common/autotest_common.sh@955 -- # kill 90413 00:18:15.773 10:05:14 -- common/autotest_common.sh@960 -- # wait 90413 00:18:15.773 10:05:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.773 10:05:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.773 10:05:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.773 10:05:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.773 10:05:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.773 10:05:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.773 10:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.773 10:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.032 10:05:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:16.032 10:05:14 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:16.032 00:18:16.032 real 0m2.613s 00:18:16.032 user 0m2.586s 00:18:16.032 sys 0m0.690s 00:18:16.032 10:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:16.032 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:18:16.032 ************************************ 00:18:16.032 END TEST nvmf_fuzz 00:18:16.032 ************************************ 00:18:16.032 10:05:14 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.032 10:05:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:16.032 10:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.032 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:18:16.032 ************************************ 00:18:16.032 START TEST nvmf_multiconnection 00:18:16.032 ************************************ 00:18:16.032 10:05:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.032 * Looking for test storage... 
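[Annotator's note] Before the teardown above, fabrics_fuzz.sh drove the target entirely over RPC: it created the TCP transport, exposed a 64 MiB malloc bdev as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, ran nvme_fuzz twice (a 30-second seeded random run, then a replay of example.json), and deleted the subsystem. A condensed recap, assuming the stock scripts/rpc.py wrapper behind the rpc_cmd helper and with long paths shortened:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a   # seeded random commands
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j example.json -a      # replay the example command set
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1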
00:18:16.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.032 10:05:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:16.032 10:05:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:16.032 10:05:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:16.032 10:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:16.032 10:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:16.032 10:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:16.032 10:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:16.032 10:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:18:16.032 10:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:18:16.032 10:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.032 10:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:18:16.032 10:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:18:16.032 10:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:18:16.032 10:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:18:16.032 10:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:16.032 10:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:18:16.032 10:05:14 -- scripts/common.sh@344 -- # : 1 00:18:16.032 10:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:16.032 10:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:16.032 10:05:14 -- scripts/common.sh@364 -- # decimal 1 00:18:16.032 10:05:14 -- scripts/common.sh@352 -- # local d=1 00:18:16.032 10:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.032 10:05:14 -- scripts/common.sh@354 -- # echo 1 00:18:16.032 10:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:16.032 10:05:14 -- scripts/common.sh@365 -- # decimal 2 00:18:16.291 10:05:14 -- scripts/common.sh@352 -- # local d=2 00:18:16.291 10:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.291 10:05:14 -- scripts/common.sh@354 -- # echo 2 00:18:16.291 10:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:16.291 10:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:16.291 10:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:16.291 10:05:14 -- scripts/common.sh@367 -- # return 0 00:18:16.291 10:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.291 10:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:16.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.291 --rc genhtml_branch_coverage=1 00:18:16.291 --rc genhtml_function_coverage=1 00:18:16.291 --rc genhtml_legend=1 00:18:16.291 --rc geninfo_all_blocks=1 00:18:16.291 --rc geninfo_unexecuted_blocks=1 00:18:16.291 00:18:16.291 ' 00:18:16.291 10:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:16.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.291 --rc genhtml_branch_coverage=1 00:18:16.291 --rc genhtml_function_coverage=1 00:18:16.291 --rc genhtml_legend=1 00:18:16.291 --rc geninfo_all_blocks=1 00:18:16.291 --rc geninfo_unexecuted_blocks=1 00:18:16.291 00:18:16.291 ' 00:18:16.291 10:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:16.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.291 --rc genhtml_branch_coverage=1 00:18:16.291 --rc genhtml_function_coverage=1 00:18:16.291 --rc genhtml_legend=1 00:18:16.291 --rc geninfo_all_blocks=1 00:18:16.291 --rc geninfo_unexecuted_blocks=1 00:18:16.291 00:18:16.291 ' 00:18:16.291 
10:05:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:16.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.291 --rc genhtml_branch_coverage=1 00:18:16.292 --rc genhtml_function_coverage=1 00:18:16.292 --rc genhtml_legend=1 00:18:16.292 --rc geninfo_all_blocks=1 00:18:16.292 --rc geninfo_unexecuted_blocks=1 00:18:16.292 00:18:16.292 ' 00:18:16.292 10:05:14 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.292 10:05:14 -- nvmf/common.sh@7 -- # uname -s 00:18:16.292 10:05:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.292 10:05:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.292 10:05:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.292 10:05:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.292 10:05:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.292 10:05:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.292 10:05:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.292 10:05:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.292 10:05:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.292 10:05:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:18:16.292 10:05:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:18:16.292 10:05:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.292 10:05:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.292 10:05:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.292 10:05:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.292 10:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.292 10:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.292 10:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.292 10:05:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.292 10:05:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.292 10:05:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.292 10:05:14 -- paths/export.sh@5 -- # export PATH 00:18:16.292 10:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.292 10:05:14 -- nvmf/common.sh@46 -- # : 0 00:18:16.292 10:05:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:16.292 10:05:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:16.292 10:05:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:16.292 10:05:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.292 10:05:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.292 10:05:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:16.292 10:05:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:16.292 10:05:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:16.292 10:05:14 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.292 10:05:14 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.292 10:05:14 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:16.292 10:05:14 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:16.292 10:05:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:16.292 10:05:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.292 10:05:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:16.292 10:05:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:16.292 10:05:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:16.292 10:05:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.292 10:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.292 10:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.292 10:05:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:16.292 10:05:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:16.292 10:05:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.292 10:05:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.292 10:05:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.292 10:05:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:16.292 10:05:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.292 10:05:14 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.292 10:05:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.292 10:05:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.292 10:05:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.292 10:05:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.292 10:05:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.292 10:05:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.292 10:05:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:16.292 10:05:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:16.292 Cannot find device "nvmf_tgt_br" 00:18:16.292 10:05:14 -- nvmf/common.sh@154 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.292 Cannot find device "nvmf_tgt_br2" 00:18:16.292 10:05:14 -- nvmf/common.sh@155 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:16.292 10:05:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:16.292 Cannot find device "nvmf_tgt_br" 00:18:16.292 10:05:14 -- nvmf/common.sh@157 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:16.292 Cannot find device "nvmf_tgt_br2" 00:18:16.292 10:05:14 -- nvmf/common.sh@158 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.292 10:05:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.292 10:05:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.292 10:05:14 -- nvmf/common.sh@161 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.292 10:05:14 -- nvmf/common.sh@162 -- # true 00:18:16.292 10:05:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.292 10:05:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.292 10:05:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.292 10:05:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.292 10:05:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.292 10:05:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.292 10:05:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.551 10:05:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.551 10:05:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.551 10:05:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.551 10:05:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.551 10:05:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.551 10:05:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.551 10:05:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.551 10:05:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:16.551 10:05:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.551 10:05:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.551 10:05:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.551 10:05:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.551 10:05:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.551 10:05:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.551 10:05:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.551 10:05:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.551 10:05:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:18:16.551 00:18:16.551 --- 10.0.0.2 ping statistics --- 00:18:16.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.551 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:16.551 10:05:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:16.551 00:18:16.551 --- 10.0.0.3 ping statistics --- 00:18:16.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.551 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:16.551 10:05:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:16.551 00:18:16.551 --- 10.0.0.1 ping statistics --- 00:18:16.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.551 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:16.551 10:05:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.551 10:05:15 -- nvmf/common.sh@421 -- # return 0 00:18:16.552 10:05:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.552 10:05:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.552 10:05:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.552 10:05:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.552 10:05:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.552 10:05:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.552 10:05:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.552 10:05:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:16.552 10:05:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.552 10:05:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.552 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.552 10:05:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.552 10:05:15 -- nvmf/common.sh@469 -- # nvmfpid=90621 00:18:16.552 10:05:15 -- nvmf/common.sh@470 -- # waitforlisten 90621 00:18:16.552 10:05:15 -- common/autotest_common.sh@829 -- # '[' -z 90621 ']' 00:18:16.552 10:05:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.552 10:05:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.552 10:05:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.552 10:05:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.552 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:18:16.552 [2024-12-16 10:05:15.092644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:16.552 [2024-12-16 10:05:15.092748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.810 [2024-12-16 10:05:15.231533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.810 [2024-12-16 10:05:15.289199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.810 [2024-12-16 10:05:15.289355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.810 [2024-12-16 10:05:15.289381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.810 [2024-12-16 10:05:15.289407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.810 [2024-12-16 10:05:15.290632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.810 [2024-12-16 10:05:15.290687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.810 [2024-12-16 10:05:15.290830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.810 [2024-12-16 10:05:15.290836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.744 10:05:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.745 10:05:16 -- common/autotest_common.sh@862 -- # return 0 00:18:17.745 10:05:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:17.745 10:05:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.745 10:05:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 [2024-12-16 10:05:16.180637] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:17.745 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.745 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 Malloc1 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 [2024-12-16 10:05:16.258482] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.745 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 Malloc2 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.745 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 Malloc3 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:17.745 
10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:17.745 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.745 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.745 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:17.745 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.745 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 Malloc4 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.004 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 Malloc5 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.004 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 Malloc6 00:18:18.004 10:05:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.004 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 Malloc7 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.004 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 Malloc8 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 
-- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:18.004 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.004 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.004 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.004 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.004 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:18.005 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.005 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 Malloc9 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.264 10:05:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 Malloc10 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.264 10:05:16 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 Malloc11 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:18.264 10:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.264 10:05:16 -- common/autotest_common.sh@10 -- # set +x 00:18:18.264 10:05:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.264 10:05:16 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:18.264 10:05:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.264 10:05:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.522 10:05:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:18.522 10:05:16 -- common/autotest_common.sh@1187 -- # local i=0 00:18:18.522 10:05:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.522 10:05:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:18.522 10:05:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:20.425 10:05:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:20.425 10:05:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:20.425 10:05:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:20.425 10:05:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:20.425 10:05:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.425 10:05:18 -- common/autotest_common.sh@1197 -- # return 0 00:18:20.425 10:05:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.425 10:05:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:20.684 10:05:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:20.684 10:05:19 -- common/autotest_common.sh@1187 -- # local i=0 00:18:20.684 10:05:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.684 10:05:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:20.684 10:05:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:22.589 10:05:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:22.589 10:05:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:22.589 10:05:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:22.589 10:05:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:22.589 10:05:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.589 10:05:21 -- common/autotest_common.sh@1197 -- # return 0 00:18:22.589 10:05:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.589 10:05:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:22.848 10:05:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:22.848 10:05:21 -- common/autotest_common.sh@1187 -- # local i=0 00:18:22.848 10:05:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.848 10:05:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:22.848 10:05:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:24.752 10:05:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:24.752 10:05:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:24.752 10:05:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:24.752 10:05:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:24.753 10:05:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:24.753 10:05:23 -- common/autotest_common.sh@1197 -- # return 0 00:18:24.753 10:05:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:24.753 10:05:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:25.011 10:05:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:25.011 10:05:23 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.011 10:05:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.011 10:05:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.011 10:05:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:27.541 10:05:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:27.541 10:05:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:27.542 10:05:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:27.542 10:05:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:27.542 10:05:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.542 10:05:25 -- common/autotest_common.sh@1197 -- # return 0 00:18:27.542 10:05:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.542 10:05:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:27.542 10:05:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:27.542 10:05:25 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.542 10:05:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.542 10:05:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:27.542 10:05:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:29.470 10:05:27 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:29.470 10:05:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:29.470 10:05:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:29.470 10:05:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:29.470 10:05:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.470 10:05:27 -- common/autotest_common.sh@1197 -- # return 0 00:18:29.470 10:05:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.470 10:05:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:29.470 10:05:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:29.470 10:05:27 -- common/autotest_common.sh@1187 -- # local i=0 00:18:29.470 10:05:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.470 10:05:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:29.470 10:05:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:31.372 10:05:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:31.372 10:05:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:31.372 10:05:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:31.372 10:05:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:31.372 10:05:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.372 10:05:29 -- common/autotest_common.sh@1197 -- # return 0 00:18:31.372 10:05:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.372 10:05:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:31.631 10:05:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:31.631 10:05:30 -- common/autotest_common.sh@1187 -- # local i=0 00:18:31.631 10:05:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.631 10:05:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:31.631 10:05:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:33.534 10:05:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:33.534 10:05:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:33.534 10:05:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:33.534 10:05:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:33.534 10:05:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.534 10:05:32 -- common/autotest_common.sh@1197 -- # return 0 00:18:33.534 10:05:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:33.534 10:05:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:33.793 10:05:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:33.793 10:05:32 -- common/autotest_common.sh@1187 -- # local i=0 00:18:33.793 10:05:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.793 10:05:32 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:33.793 10:05:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:36.327 10:05:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:36.327 10:05:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:36.327 10:05:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:36.327 10:05:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:36.327 10:05:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.327 10:05:34 -- common/autotest_common.sh@1197 -- # return 0 00:18:36.327 10:05:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.327 10:05:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:36.327 10:05:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:36.327 10:05:34 -- common/autotest_common.sh@1187 -- # local i=0 00:18:36.327 10:05:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.327 10:05:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.327 10:05:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:38.232 10:05:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:38.232 10:05:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:38.232 10:05:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:38.232 10:05:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:38.232 10:05:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.232 10:05:36 -- common/autotest_common.sh@1197 -- # return 0 00:18:38.232 10:05:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.232 10:05:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:38.232 10:05:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:38.232 10:05:36 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.232 10:05:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.232 10:05:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.232 10:05:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.135 10:05:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.394 10:05:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.394 10:05:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:40.394 10:05:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.394 10:05:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.394 10:05:38 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.394 10:05:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.394 10:05:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:40.394 10:05:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:40.394 10:05:38 -- common/autotest_common.sh@1187 -- # local i=0 
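The trace above repeats one host-side pattern per subsystem, cnode1 through cnode11: connect over TCP, then poll until the new namespace is visible. Condensed into a sketch (paraphrasing target/multiconnection.sh@28-30 and the waitforserial helper as traced; HOSTID and tries are stand-in names introduced here, not names from the script):

for i in $(seq 1 $NVMF_SUBSYS); do
    # HOSTID: placeholder for the host uuid shown in the trace above
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # waitforserial SPDK$i: retry every 2 s, up to ~15 times, until lsblk lists
    # a block device whose serial number matches the subsystem serial
    tries=0
    until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK$i) >= 1 )); do
        (( tries++ > 15 )) && break
        sleep 2
    done
done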
00:18:40.394 10:05:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.394 10:05:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:40.394 10:05:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:42.926 10:05:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:42.926 10:05:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:42.926 10:05:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:42.926 10:05:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:42.926 10:05:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.926 10:05:40 -- common/autotest_common.sh@1197 -- # return 0 00:18:42.926 10:05:40 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:42.926 [global] 00:18:42.926 thread=1 00:18:42.926 invalidate=1 00:18:42.926 rw=read 00:18:42.926 time_based=1 00:18:42.926 runtime=10 00:18:42.926 ioengine=libaio 00:18:42.926 direct=1 00:18:42.926 bs=262144 00:18:42.926 iodepth=64 00:18:42.926 norandommap=1 00:18:42.926 numjobs=1 00:18:42.926 00:18:42.926 [job0] 00:18:42.926 filename=/dev/nvme0n1 00:18:42.926 [job1] 00:18:42.926 filename=/dev/nvme10n1 00:18:42.926 [job2] 00:18:42.926 filename=/dev/nvme1n1 00:18:42.926 [job3] 00:18:42.926 filename=/dev/nvme2n1 00:18:42.926 [job4] 00:18:42.926 filename=/dev/nvme3n1 00:18:42.926 [job5] 00:18:42.926 filename=/dev/nvme4n1 00:18:42.926 [job6] 00:18:42.926 filename=/dev/nvme5n1 00:18:42.926 [job7] 00:18:42.926 filename=/dev/nvme6n1 00:18:42.926 [job8] 00:18:42.926 filename=/dev/nvme7n1 00:18:42.926 [job9] 00:18:42.926 filename=/dev/nvme8n1 00:18:42.926 [job10] 00:18:42.926 filename=/dev/nvme9n1 00:18:42.926 Could not set queue depth (nvme0n1) 00:18:42.926 Could not set queue depth (nvme10n1) 00:18:42.926 Could not set queue depth (nvme1n1) 00:18:42.926 Could not set queue depth (nvme2n1) 00:18:42.926 Could not set queue depth (nvme3n1) 00:18:42.926 Could not set queue depth (nvme4n1) 00:18:42.926 Could not set queue depth (nvme5n1) 00:18:42.926 Could not set queue depth (nvme6n1) 00:18:42.926 Could not set queue depth (nvme7n1) 00:18:42.926 Could not set queue depth (nvme8n1) 00:18:42.926 Could not set queue depth (nvme9n1) 00:18:42.926 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:42.926 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.926 fio-3.35 00:18:42.926 Starting 11 threads 00:18:55.136 00:18:55.136 job0: (groupid=0, jobs=1): err= 0: pid=91103: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=603, BW=151MiB/s (158MB/s)(1520MiB/10082msec) 00:18:55.136 slat (usec): min=18, max=92846, avg=1570.24, stdev=5736.98 00:18:55.136 clat (msec): min=5, max=244, avg=104.36, stdev=38.53 00:18:55.136 lat (msec): min=5, max=244, avg=105.93, stdev=39.28 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 57], 20.00th=[ 85], 00:18:55.136 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 111], 00:18:55.136 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 148], 95.00th=[ 171], 00:18:55.136 | 99.00th=[ 224], 99.50th=[ 230], 99.90th=[ 236], 99.95th=[ 245], 00:18:55.136 | 99.99th=[ 245] 00:18:55.136 bw ( KiB/s): min=92160, max=318464, per=8.13%, avg=153956.20, stdev=49063.88, samples=20 00:18:55.136 iops : min= 360, max= 1244, avg=601.30, stdev=191.66, samples=20 00:18:55.136 lat (msec) : 10=0.53%, 20=1.28%, 50=7.80%, 100=35.51%, 250=54.88% 00:18:55.136 cpu : usr=0.21%, sys=1.91%, ctx=1235, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job1: (groupid=0, jobs=1): err= 0: pid=91104: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=813, BW=203MiB/s (213MB/s)(2051MiB/10083msec) 00:18:55.136 slat (usec): min=14, max=96364, avg=1137.94, stdev=4816.57 00:18:55.136 clat (usec): min=712, max=258799, avg=77361.80, stdev=41607.20 00:18:55.136 lat (usec): min=1658, max=268914, avg=78499.74, stdev=42365.16 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 35], 00:18:55.136 | 30.00th=[ 47], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 92], 00:18:55.136 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 131], 95.00th=[ 157], 00:18:55.136 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 213], 99.95th=[ 220], 00:18:55.136 | 99.99th=[ 259] 00:18:55.136 bw ( KiB/s): min=108032, max=504846, per=11.00%, avg=208384.70, stdev=103785.14, samples=20 00:18:55.136 iops : min= 422, max= 1972, avg=814.00, stdev=405.40, samples=20 00:18:55.136 lat (usec) : 750=0.01% 00:18:55.136 lat (msec) : 2=0.02%, 4=0.11%, 10=2.10%, 20=2.91%, 50=25.59% 00:18:55.136 lat (msec) : 100=41.49%, 250=27.73%, 500=0.04% 00:18:55.136 cpu : usr=0.34%, sys=2.41%, ctx=1631, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=8205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job2: (groupid=0, jobs=1): err= 0: pid=91105: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=554, BW=139MiB/s (145MB/s)(1398MiB/10073msec) 00:18:55.136 slat (usec): min=14, max=90327, avg=1608.76, stdev=6296.35 00:18:55.136 clat (msec): min=30, max=245, avg=113.52, stdev=34.99 00:18:55.136 lat 
(msec): min=30, max=274, avg=115.13, stdev=35.87 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 47], 5.00th=[ 68], 10.00th=[ 80], 20.00th=[ 88], 00:18:55.136 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 104], 60.00th=[ 113], 00:18:55.136 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 188], 00:18:55.136 | 99.00th=[ 205], 99.50th=[ 211], 99.90th=[ 224], 99.95th=[ 226], 00:18:55.136 | 99.99th=[ 245] 00:18:55.136 bw ( KiB/s): min=82432, max=186368, per=7.46%, avg=141384.50, stdev=32956.59, samples=20 00:18:55.136 iops : min= 322, max= 728, avg=552.25, stdev=128.76, samples=20 00:18:55.136 lat (msec) : 50=1.93%, 100=41.72%, 250=56.35% 00:18:55.136 cpu : usr=0.18%, sys=1.87%, ctx=1165, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job3: (groupid=0, jobs=1): err= 0: pid=91106: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=513, BW=128MiB/s (135MB/s)(1294MiB/10076msec) 00:18:55.136 slat (usec): min=21, max=114766, avg=1927.63, stdev=7115.87 00:18:55.136 clat (msec): min=15, max=306, avg=122.47, stdev=38.46 00:18:55.136 lat (msec): min=16, max=306, avg=124.39, stdev=39.40 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 42], 5.00th=[ 85], 10.00th=[ 90], 20.00th=[ 95], 00:18:55.136 | 30.00th=[ 100], 40.00th=[ 105], 50.00th=[ 111], 60.00th=[ 121], 00:18:55.136 | 70.00th=[ 132], 80.00th=[ 148], 90.00th=[ 180], 95.00th=[ 197], 00:18:55.136 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 300], 99.95th=[ 300], 00:18:55.136 | 99.99th=[ 309] 00:18:55.136 bw ( KiB/s): min=74091, max=174754, per=6.91%, avg=130895.05, stdev=31564.32, samples=20 00:18:55.136 iops : min= 289, max= 682, avg=511.20, stdev=123.26, samples=20 00:18:55.136 lat (msec) : 20=0.15%, 50=1.35%, 100=30.62%, 250=66.83%, 500=1.04% 00:18:55.136 cpu : usr=0.15%, sys=1.77%, ctx=1028, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=5177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job4: (groupid=0, jobs=1): err= 0: pid=91107: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=843, BW=211MiB/s (221MB/s)(2126MiB/10075msec) 00:18:55.136 slat (usec): min=16, max=107542, avg=1150.70, stdev=4878.45 00:18:55.136 clat (msec): min=4, max=275, avg=74.53, stdev=41.19 00:18:55.136 lat (msec): min=4, max=283, avg=75.68, stdev=41.91 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 32], 00:18:55.136 | 30.00th=[ 37], 40.00th=[ 53], 50.00th=[ 84], 60.00th=[ 94], 00:18:55.136 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 123], 95.00th=[ 132], 00:18:55.136 | 99.00th=[ 176], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 275], 00:18:55.136 | 99.99th=[ 275] 00:18:55.136 bw ( KiB/s): min=124928, max=529920, per=11.40%, avg=216023.55, stdev=125142.73, samples=20 00:18:55.136 iops : min= 488, max= 2070, avg=843.70, stdev=488.92, samples=20 00:18:55.136 lat (msec) : 10=0.21%, 
20=1.52%, 50=36.98%, 100=32.11%, 250=28.66% 00:18:55.136 lat (msec) : 500=0.53% 00:18:55.136 cpu : usr=0.23%, sys=2.86%, ctx=1892, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=8503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job5: (groupid=0, jobs=1): err= 0: pid=91108: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=670, BW=168MiB/s (176MB/s)(1693MiB/10101msec) 00:18:55.136 slat (usec): min=16, max=118452, avg=1419.68, stdev=5453.72 00:18:55.136 clat (usec): min=1885, max=258472, avg=93878.44, stdev=38008.49 00:18:55.136 lat (usec): min=1918, max=258500, avg=95298.13, stdev=38783.27 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 59], 00:18:55.136 | 30.00th=[ 82], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 105], 00:18:55.136 | 70.00th=[ 113], 80.00th=[ 125], 90.00th=[ 140], 95.00th=[ 148], 00:18:55.136 | 99.00th=[ 188], 99.50th=[ 209], 99.90th=[ 224], 99.95th=[ 224], 00:18:55.136 | 99.99th=[ 259] 00:18:55.136 bw ( KiB/s): min=96768, max=407040, per=9.06%, avg=171687.55, stdev=67180.01, samples=20 00:18:55.136 iops : min= 378, max= 1590, avg=670.50, stdev=262.48, samples=20 00:18:55.136 lat (msec) : 2=0.01%, 4=0.10%, 10=0.28%, 20=1.06%, 50=15.98% 00:18:55.136 lat (msec) : 100=34.94%, 250=47.60%, 500=0.01% 00:18:55.136 cpu : usr=0.29%, sys=2.21%, ctx=1200, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=6771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 job6: (groupid=0, jobs=1): err= 0: pid=91109: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=517, BW=129MiB/s (136MB/s)(1304MiB/10076msec) 00:18:55.136 slat (usec): min=20, max=124285, avg=1841.17, stdev=7136.18 00:18:55.136 clat (msec): min=26, max=220, avg=121.44, stdev=32.27 00:18:55.136 lat (msec): min=27, max=328, avg=123.28, stdev=33.33 00:18:55.136 clat percentiles (msec): 00:18:55.136 | 1.00th=[ 65], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 96], 00:18:55.136 | 30.00th=[ 102], 40.00th=[ 107], 50.00th=[ 113], 60.00th=[ 123], 00:18:55.136 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 174], 95.00th=[ 190], 00:18:55.136 | 99.00th=[ 205], 99.50th=[ 207], 99.90th=[ 220], 99.95th=[ 222], 00:18:55.136 | 99.99th=[ 222] 00:18:55.136 bw ( KiB/s): min=71168, max=169984, per=6.96%, avg=131915.60, stdev=27240.23, samples=20 00:18:55.136 iops : min= 278, max= 664, avg=515.15, stdev=106.34, samples=20 00:18:55.136 lat (msec) : 50=0.44%, 100=27.39%, 250=72.17% 00:18:55.136 cpu : usr=0.21%, sys=1.91%, ctx=1113, majf=0, minf=4097 00:18:55.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:55.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.136 issued rwts: total=5217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.136 
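In each job's bandwidth line, per= is that job's share of the group's aggregate bandwidth (the READ figure in the run status group a little further down); job6's numbers check out:

    131915.60 KiB/s / 1024 = 128.8 MiB/s (approx.)
    128.8 MiB/s / 1850 MiB/s = 0.0696, i.e. per=6.96%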
job7: (groupid=0, jobs=1): err= 0: pid=91110: Mon Dec 16 10:05:51 2024 00:18:55.136 read: IOPS=920, BW=230MiB/s (241MB/s)(2306MiB/10018msec) 00:18:55.136 slat (usec): min=14, max=157389, avg=998.90, stdev=5465.04 00:18:55.137 clat (usec): min=652, max=333625, avg=68391.53, stdev=47888.09 00:18:55.137 lat (usec): min=744, max=341305, avg=69390.43, stdev=48797.18 00:18:55.137 clat percentiles (msec): 00:18:55.137 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 25], 20.00th=[ 29], 00:18:55.137 | 30.00th=[ 34], 40.00th=[ 40], 50.00th=[ 56], 60.00th=[ 66], 00:18:55.137 | 70.00th=[ 81], 80.00th=[ 110], 90.00th=[ 144], 95.00th=[ 178], 00:18:55.137 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 213], 99.95th=[ 220], 00:18:55.137 | 99.99th=[ 334] 00:18:55.137 bw ( KiB/s): min=76288, max=513024, per=12.38%, avg=234505.60, stdev=146290.47, samples=20 00:18:55.137 iops : min= 298, max= 2004, avg=915.90, stdev=571.34, samples=20 00:18:55.137 lat (usec) : 750=0.03%, 1000=0.05% 00:18:55.137 lat (msec) : 2=0.59%, 4=0.27%, 10=0.42%, 20=3.93%, 50=42.00% 00:18:55.137 lat (msec) : 100=29.34%, 250=23.34%, 500=0.03% 00:18:55.137 cpu : usr=0.34%, sys=2.94%, ctx=1830, majf=0, minf=4097 00:18:55.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:55.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.137 issued rwts: total=9222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.137 job8: (groupid=0, jobs=1): err= 0: pid=91112: Mon Dec 16 10:05:51 2024 00:18:55.137 read: IOPS=645, BW=161MiB/s (169MB/s)(1627MiB/10082msec) 00:18:55.137 slat (usec): min=20, max=73599, avg=1508.41, stdev=5144.36 00:18:55.137 clat (msec): min=22, max=204, avg=97.53, stdev=25.48 00:18:55.137 lat (msec): min=22, max=225, avg=99.03, stdev=26.16 00:18:55.137 clat percentiles (msec): 00:18:55.137 | 1.00th=[ 37], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 74], 00:18:55.137 | 30.00th=[ 87], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 104], 00:18:55.137 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 128], 95.00th=[ 138], 00:18:55.137 | 99.00th=[ 171], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 205], 00:18:55.137 | 99.99th=[ 205] 00:18:55.137 bw ( KiB/s): min=117760, max=243200, per=8.70%, avg=164888.95, stdev=32121.29, samples=20 00:18:55.137 iops : min= 460, max= 950, avg=643.95, stdev=125.52, samples=20 00:18:55.137 lat (msec) : 50=2.80%, 100=51.01%, 250=46.19% 00:18:55.137 cpu : usr=0.36%, sys=2.21%, ctx=1205, majf=0, minf=4097 00:18:55.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:55.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.137 issued rwts: total=6508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.137 job9: (groupid=0, jobs=1): err= 0: pid=91113: Mon Dec 16 10:05:51 2024 00:18:55.137 read: IOPS=743, BW=186MiB/s (195MB/s)(1877MiB/10097msec) 00:18:55.137 slat (usec): min=21, max=91165, avg=1258.26, stdev=5200.76 00:18:55.137 clat (usec): min=778, max=271206, avg=84632.54, stdev=49942.77 00:18:55.137 lat (usec): min=851, max=292192, avg=85890.79, stdev=50846.05 00:18:55.137 clat percentiles (msec): 00:18:55.137 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 37], 00:18:55.137 | 30.00th=[ 52], 40.00th=[ 62], 
50.00th=[ 68], 60.00th=[ 78], 00:18:55.137 | 70.00th=[ 117], 80.00th=[ 132], 90.00th=[ 159], 95.00th=[ 184], 00:18:55.137 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 251], 99.95th=[ 257], 00:18:55.137 | 99.99th=[ 271] 00:18:55.137 bw ( KiB/s): min=75264, max=474112, per=10.06%, avg=190644.85, stdev=110539.30, samples=20 00:18:55.137 iops : min= 294, max= 1852, avg=744.55, stdev=431.81, samples=20 00:18:55.137 lat (usec) : 1000=0.01% 00:18:55.137 lat (msec) : 2=0.01%, 4=0.16%, 10=0.04%, 20=0.52%, 50=28.04% 00:18:55.137 lat (msec) : 100=35.03%, 250=36.04%, 500=0.15% 00:18:55.137 cpu : usr=0.27%, sys=2.36%, ctx=1481, majf=0, minf=4097 00:18:55.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.137 issued rwts: total=7508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.137 job10: (groupid=0, jobs=1): err= 0: pid=91114: Mon Dec 16 10:05:51 2024 00:18:55.137 read: IOPS=591, BW=148MiB/s (155MB/s)(1490MiB/10071msec) 00:18:55.137 slat (usec): min=14, max=76134, avg=1571.61, stdev=5909.61 00:18:55.137 clat (msec): min=4, max=245, avg=106.41, stdev=44.13 00:18:55.137 lat (msec): min=4, max=279, avg=107.99, stdev=45.07 00:18:55.137 clat percentiles (msec): 00:18:55.137 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 54], 20.00th=[ 79], 00:18:55.137 | 30.00th=[ 91], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 110], 00:18:55.137 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 171], 95.00th=[ 190], 00:18:55.137 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 245], 00:18:55.137 | 99.99th=[ 245] 00:18:55.137 bw ( KiB/s): min=76288, max=338432, per=7.96%, avg=150867.05, stdev=62560.49, samples=20 00:18:55.137 iops : min= 298, max= 1322, avg=589.20, stdev=244.37, samples=20 00:18:55.137 lat (msec) : 10=1.66%, 20=1.51%, 50=6.26%, 100=38.26%, 250=52.31% 00:18:55.137 cpu : usr=0.22%, sys=1.83%, ctx=1263, majf=0, minf=4097 00:18:55.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:55.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:55.137 issued rwts: total=5959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:55.137 00:18:55.137 Run status group 0 (all jobs): 00:18:55.137 READ: bw=1850MiB/s (1940MB/s), 128MiB/s-230MiB/s (135MB/s-241MB/s), io=18.2GiB (19.6GB), run=10018-10101msec 00:18:55.137 00:18:55.137 Disk stats (read/write): 00:18:55.137 nvme0n1: ios=12033/0, merge=0/0, ticks=1233944/0, in_queue=1233944, util=97.16% 00:18:55.137 nvme10n1: ios=16299/0, merge=0/0, ticks=1233994/0, in_queue=1233994, util=97.37% 00:18:55.137 nvme1n1: ios=11052/0, merge=0/0, ticks=1240957/0, in_queue=1240957, util=97.49% 00:18:55.137 nvme2n1: ios=10257/0, merge=0/0, ticks=1241682/0, in_queue=1241682, util=97.95% 00:18:55.137 nvme3n1: ios=16878/0, merge=0/0, ticks=1233865/0, in_queue=1233865, util=97.86% 00:18:55.137 nvme4n1: ios=13427/0, merge=0/0, ticks=1235566/0, in_queue=1235566, util=97.97% 00:18:55.137 nvme5n1: ios=10318/0, merge=0/0, ticks=1241164/0, in_queue=1241164, util=98.25% 00:18:55.137 nvme6n1: ios=18404/0, merge=0/0, ticks=1235853/0, in_queue=1235853, util=98.21% 00:18:55.137 nvme7n1: ios=12889/0, merge=0/0, ticks=1240226/0, 
in_queue=1240226, util=98.52% 00:18:55.137 nvme8n1: ios=14913/0, merge=0/0, ticks=1235309/0, in_queue=1235309, util=98.45% 00:18:55.137 nvme9n1: ios=11791/0, merge=0/0, ticks=1239229/0, in_queue=1239229, util=98.50% 00:18:55.137 10:05:51 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:55.137 [global] 00:18:55.137 thread=1 00:18:55.137 invalidate=1 00:18:55.137 rw=randwrite 00:18:55.137 time_based=1 00:18:55.137 runtime=10 00:18:55.137 ioengine=libaio 00:18:55.137 direct=1 00:18:55.137 bs=262144 00:18:55.137 iodepth=64 00:18:55.137 norandommap=1 00:18:55.137 numjobs=1 00:18:55.137 00:18:55.137 [job0] 00:18:55.137 filename=/dev/nvme0n1 00:18:55.137 [job1] 00:18:55.137 filename=/dev/nvme10n1 00:18:55.137 [job2] 00:18:55.137 filename=/dev/nvme1n1 00:18:55.137 [job3] 00:18:55.137 filename=/dev/nvme2n1 00:18:55.137 [job4] 00:18:55.137 filename=/dev/nvme3n1 00:18:55.137 [job5] 00:18:55.137 filename=/dev/nvme4n1 00:18:55.137 [job6] 00:18:55.137 filename=/dev/nvme5n1 00:18:55.137 [job7] 00:18:55.137 filename=/dev/nvme6n1 00:18:55.137 [job8] 00:18:55.137 filename=/dev/nvme7n1 00:18:55.137 [job9] 00:18:55.137 filename=/dev/nvme8n1 00:18:55.137 [job10] 00:18:55.137 filename=/dev/nvme9n1 00:18:55.137 Could not set queue depth (nvme0n1) 00:18:55.137 Could not set queue depth (nvme10n1) 00:18:55.137 Could not set queue depth (nvme1n1) 00:18:55.137 Could not set queue depth (nvme2n1) 00:18:55.137 Could not set queue depth (nvme3n1) 00:18:55.137 Could not set queue depth (nvme4n1) 00:18:55.137 Could not set queue depth (nvme5n1) 00:18:55.137 Could not set queue depth (nvme6n1) 00:18:55.137 Could not set queue depth (nvme7n1) 00:18:55.137 Could not set queue depth (nvme8n1) 00:18:55.137 Could not set queue depth (nvme9n1) 00:18:55.137 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:55.137 fio-3.35 00:18:55.137 Starting 11 threads 00:19:05.113 00:19:05.113 job0: (groupid=0, jobs=1): err= 0: pid=91310: Mon Dec 16 10:06:02 2024 00:19:05.113 write: IOPS=746, BW=187MiB/s (196MB/s)(1879MiB/10069msec); 0 zone resets 00:19:05.113 slat (usec): min=19, max=19195, avg=1326.09, stdev=2272.13 00:19:05.113 
clat (msec): min=15, max=176, avg=84.41, stdev=13.61 00:19:05.113 lat (msec): min=15, max=176, avg=85.73, stdev=13.62 00:19:05.113 clat percentiles (msec): 00:19:05.113 | 1.00th=[ 77], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:19:05.113 | 30.00th=[ 82], 40.00th=[ 82], 50.00th=[ 82], 60.00th=[ 83], 00:19:05.113 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 121], 00:19:05.113 | 99.00th=[ 142], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 178], 00:19:05.113 | 99.99th=[ 178] 00:19:05.113 bw ( KiB/s): min=116456, max=201728, per=12.18%, avg=190737.30, stdev=23263.86, samples=20 00:19:05.114 iops : min= 454, max= 788, avg=745.00, stdev=91.02, samples=20 00:19:05.114 lat (msec) : 20=0.04%, 50=0.27%, 100=91.30%, 250=8.40% 00:19:05.114 cpu : usr=1.48%, sys=2.01%, ctx=10374, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,7514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job1: (groupid=0, jobs=1): err= 0: pid=91311: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=837, BW=209MiB/s (220MB/s)(2108MiB/10068msec); 0 zone resets 00:19:05.114 slat (usec): min=18, max=7864, avg=1181.44, stdev=2027.34 00:19:05.114 clat (msec): min=7, max=150, avg=75.22, stdev=16.03 00:19:05.114 lat (msec): min=7, max=150, avg=76.40, stdev=16.18 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 78], 00:19:05.114 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 84], 60.00th=[ 84], 00:19:05.114 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 85], 95.00th=[ 86], 00:19:05.114 | 99.00th=[ 87], 99.50th=[ 92], 99.90th=[ 140], 99.95th=[ 146], 00:19:05.114 | 99.99th=[ 150] 00:19:05.114 bw ( KiB/s): min=189440, max=367104, per=13.67%, avg=214201.30, stdev=52680.42, samples=20 00:19:05.114 iops : min= 740, max= 1434, avg=836.70, stdev=205.79, samples=20 00:19:05.114 lat (msec) : 10=0.05%, 20=0.19%, 50=18.56%, 100=80.75%, 250=0.45% 00:19:05.114 cpu : usr=2.49%, sys=2.01%, ctx=7429, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,8431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job2: (groupid=0, jobs=1): err= 0: pid=91323: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=538, BW=135MiB/s (141MB/s)(1362MiB/10111msec); 0 zone resets 00:19:05.114 slat (usec): min=19, max=13900, avg=1799.25, stdev=3125.14 00:19:05.114 clat (msec): min=6, max=230, avg=116.97, stdev=13.86 00:19:05.114 lat (msec): min=6, max=230, avg=118.77, stdev=13.77 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 52], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114], 00:19:05.114 | 30.00th=[ 118], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 120], 00:19:05.114 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 124], 95.00th=[ 125], 00:19:05.114 | 99.00th=[ 148], 99.50th=[ 178], 99.90th=[ 224], 99.95th=[ 224], 00:19:05.114 | 99.99th=[ 230] 00:19:05.114 bw ( KiB/s): min=131584, max=166220, per=8.80%, avg=137807.90, stdev=7002.30, samples=20 00:19:05.114 iops : 
min= 514, max= 649, avg=538.25, stdev=27.31, samples=20 00:19:05.114 lat (msec) : 10=0.06%, 20=0.13%, 50=0.68%, 100=2.46%, 250=96.68% 00:19:05.114 cpu : usr=1.29%, sys=1.61%, ctx=2287, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,5446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job3: (groupid=0, jobs=1): err= 0: pid=91324: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=364, BW=91.1MiB/s (95.5MB/s)(925MiB/10152msec); 0 zone resets 00:19:05.114 slat (usec): min=19, max=85344, avg=2699.90, stdev=4789.13 00:19:05.114 clat (msec): min=8, max=323, avg=172.88, stdev=16.98 00:19:05.114 lat (msec): min=8, max=323, avg=175.58, stdev=16.49 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.114 | 30.00th=[ 174], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.114 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:19:05.114 | 99.00th=[ 249], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 326], 00:19:05.114 | 99.99th=[ 326] 00:19:05.114 bw ( KiB/s): min=76288, max=96256, per=5.94%, avg=93056.00, stdev=4172.34, samples=20 00:19:05.114 iops : min= 298, max= 376, avg=363.50, stdev=16.30, samples=20 00:19:05.114 lat (msec) : 10=0.03%, 50=0.32%, 100=0.22%, 250=98.65%, 500=0.78% 00:19:05.114 cpu : usr=0.81%, sys=1.11%, ctx=5716, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,3698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job4: (groupid=0, jobs=1): err= 0: pid=91325: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=384, BW=96.2MiB/s (101MB/s)(976MiB/10144msec); 0 zone resets 00:19:05.114 slat (usec): min=25, max=27879, avg=2530.49, stdev=4418.15 00:19:05.114 clat (msec): min=30, max=320, avg=163.78, stdev=24.20 00:19:05.114 lat (msec): min=30, max=320, avg=166.31, stdev=24.22 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 85], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 163], 00:19:05.114 | 30.00th=[ 165], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.114 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 176], 95.00th=[ 178], 00:19:05.114 | 99.00th=[ 211], 99.50th=[ 266], 99.90th=[ 309], 99.95th=[ 321], 00:19:05.114 | 99.99th=[ 321] 00:19:05.114 bw ( KiB/s): min=90624, max=135168, per=6.27%, avg=98269.20, stdev=11133.20, samples=20 00:19:05.114 iops : min= 354, max= 528, avg=383.85, stdev=43.50, samples=20 00:19:05.114 lat (msec) : 50=0.26%, 100=1.18%, 250=97.90%, 500=0.67% 00:19:05.114 cpu : usr=1.00%, sys=1.21%, ctx=6209, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,3902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 
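For the write jobs the bw and iops lines are two views of the same samples at the wrapper's 256 KiB block size, so iops x 256 KiB reproduces the bandwidth figures; taking job4 just above:

    min: 354 x 256 KiB = 90624 KiB/s     (reported min bw 90624)
    max: 528 x 256 KiB = 135168 KiB/s    (reported max bw 135168)
    avg: 383.85 x 256 KiB = 98265.6 KiB/s, vs. reported avg 98269.20 (the averages are rounded independently)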
00:19:05.114 job5: (groupid=0, jobs=1): err= 0: pid=91326: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=535, BW=134MiB/s (140MB/s)(1354MiB/10114msec); 0 zone resets 00:19:05.114 slat (usec): min=18, max=12418, avg=1793.06, stdev=3110.10 00:19:05.114 clat (msec): min=18, max=234, avg=117.62, stdev=12.18 00:19:05.114 lat (msec): min=18, max=234, avg=119.41, stdev=11.98 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 64], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114], 00:19:05.114 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 120], 00:19:05.114 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 124], 95.00th=[ 125], 00:19:05.114 | 99.00th=[ 150], 99.50th=[ 182], 99.90th=[ 228], 99.95th=[ 228], 00:19:05.114 | 99.99th=[ 234] 00:19:05.114 bw ( KiB/s): min=132096, max=151760, per=8.75%, avg=137033.70, stdev=3948.29, samples=20 00:19:05.114 iops : min= 516, max= 592, avg=535.20, stdev=15.29, samples=20 00:19:05.114 lat (msec) : 20=0.04%, 50=0.65%, 100=2.05%, 250=97.27% 00:19:05.114 cpu : usr=1.29%, sys=1.80%, ctx=3727, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,5417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job6: (groupid=0, jobs=1): err= 0: pid=91327: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=541, BW=135MiB/s (142MB/s)(1369MiB/10113msec); 0 zone resets 00:19:05.114 slat (usec): min=18, max=12970, avg=1820.68, stdev=3109.87 00:19:05.114 clat (msec): min=4, max=234, avg=116.34, stdev=13.75 00:19:05.114 lat (msec): min=4, max=234, avg=118.16, stdev=13.61 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 61], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 113], 00:19:05.114 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 120], 00:19:05.114 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 123], 95.00th=[ 125], 00:19:05.114 | 99.00th=[ 129], 99.50th=[ 182], 99.90th=[ 220], 99.95th=[ 228], 00:19:05.114 | 99.99th=[ 234] 00:19:05.114 bw ( KiB/s): min=132096, max=175967, per=8.84%, avg=138551.05, stdev=8994.47, samples=20 00:19:05.114 iops : min= 516, max= 687, avg=541.15, stdev=35.06, samples=20 00:19:05.114 lat (msec) : 10=0.09%, 20=0.07%, 50=0.60%, 100=3.49%, 250=95.74% 00:19:05.114 cpu : usr=1.25%, sys=1.76%, ctx=3440, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,5475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job7: (groupid=0, jobs=1): err= 0: pid=91329: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=366, BW=91.7MiB/s (96.1MB/s)(930MiB/10147msec); 0 zone resets 00:19:05.114 slat (usec): min=21, max=22105, avg=2682.63, stdev=4606.44 00:19:05.114 clat (msec): min=23, max=322, avg=171.80, stdev=15.66 00:19:05.114 lat (msec): min=23, max=322, avg=174.48, stdev=15.19 00:19:05.114 clat percentiles (msec): 00:19:05.114 | 1.00th=[ 123], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.114 | 30.00th=[ 174], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.114 | 70.00th=[ 176], 
80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 178], 00:19:05.114 | 99.00th=[ 222], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 321], 00:19:05.114 | 99.99th=[ 321] 00:19:05.114 bw ( KiB/s): min=90112, max=96256, per=5.98%, avg=93610.00, stdev=1596.44, samples=20 00:19:05.114 iops : min= 352, max= 376, avg=365.65, stdev= 6.25, samples=20 00:19:05.114 lat (msec) : 50=0.22%, 100=0.54%, 250=98.55%, 500=0.70% 00:19:05.114 cpu : usr=0.96%, sys=1.03%, ctx=3801, majf=0, minf=1 00:19:05.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.114 issued rwts: total=0,3720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.114 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.114 job8: (groupid=0, jobs=1): err= 0: pid=91333: Mon Dec 16 10:06:02 2024 00:19:05.114 write: IOPS=364, BW=91.2MiB/s (95.7MB/s)(926MiB/10144msec); 0 zone resets 00:19:05.115 slat (usec): min=23, max=69600, avg=2696.49, stdev=4724.74 00:19:05.115 clat (msec): min=72, max=314, avg=172.60, stdev=13.00 00:19:05.115 lat (msec): min=72, max=314, avg=175.29, stdev=12.32 00:19:05.115 clat percentiles (msec): 00:19:05.115 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:19:05.115 | 30.00th=[ 174], 40.00th=[ 174], 50.00th=[ 174], 60.00th=[ 174], 00:19:05.115 | 70.00th=[ 176], 80.00th=[ 176], 90.00th=[ 178], 95.00th=[ 180], 00:19:05.115 | 99.00th=[ 226], 99.50th=[ 259], 99.90th=[ 305], 99.95th=[ 313], 00:19:05.115 | 99.99th=[ 313] 00:19:05.115 bw ( KiB/s): min=78336, max=96256, per=5.95%, avg=93158.40, stdev=3622.20, samples=20 00:19:05.115 iops : min= 306, max= 376, avg=363.90, stdev=14.15, samples=20 00:19:05.115 lat (msec) : 100=0.24%, 250=99.16%, 500=0.59% 00:19:05.115 cpu : usr=0.86%, sys=1.28%, ctx=3908, majf=0, minf=1 00:19:05.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:05.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.115 issued rwts: total=0,3702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.115 job9: (groupid=0, jobs=1): err= 0: pid=91334: Mon Dec 16 10:06:02 2024 00:19:05.115 write: IOPS=746, BW=187MiB/s (196MB/s)(1880MiB/10076msec); 0 zone resets 00:19:05.115 slat (usec): min=20, max=25709, avg=1325.11, stdev=2278.16 00:19:05.115 clat (msec): min=2, max=181, avg=84.42, stdev=14.14 00:19:05.115 lat (msec): min=2, max=181, avg=85.75, stdev=14.17 00:19:05.115 clat percentiles (msec): 00:19:05.115 | 1.00th=[ 77], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 79], 00:19:05.115 | 30.00th=[ 82], 40.00th=[ 82], 50.00th=[ 82], 60.00th=[ 83], 00:19:05.115 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 85], 95.00th=[ 121], 00:19:05.115 | 99.00th=[ 146], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 182], 00:19:05.115 | 99.99th=[ 182] 00:19:05.115 bw ( KiB/s): min=119296, max=201728, per=12.18%, avg=190848.00, stdev=22804.88, samples=20 00:19:05.115 iops : min= 466, max= 788, avg=745.50, stdev=89.08, samples=20 00:19:05.115 lat (msec) : 4=0.05%, 10=0.05%, 20=0.07%, 50=0.21%, 100=91.11% 00:19:05.115 lat (msec) : 250=8.50% 00:19:05.115 cpu : usr=1.55%, sys=2.02%, ctx=8335, majf=0, minf=1 00:19:05.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:05.115 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.115 issued rwts: total=0,7518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.115 job10: (groupid=0, jobs=1): err= 0: pid=91335: Mon Dec 16 10:06:02 2024 00:19:05.115 write: IOPS=724, BW=181MiB/s (190MB/s)(1826MiB/10075msec); 0 zone resets 00:19:05.115 slat (usec): min=16, max=31068, avg=1338.85, stdev=2399.38 00:19:05.115 clat (msec): min=3, max=202, avg=86.93, stdev=23.02 00:19:05.115 lat (msec): min=3, max=206, avg=88.27, stdev=23.29 00:19:05.115 clat percentiles (msec): 00:19:05.115 | 1.00th=[ 36], 5.00th=[ 79], 10.00th=[ 79], 20.00th=[ 80], 00:19:05.115 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 84], 60.00th=[ 84], 00:19:05.115 | 70.00th=[ 85], 80.00th=[ 85], 90.00th=[ 86], 95.00th=[ 165], 00:19:05.115 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 197], 00:19:05.115 | 99.99th=[ 203] 00:19:05.115 bw ( KiB/s): min=91648, max=199680, per=11.83%, avg=185318.40, stdev=31681.99, samples=20 00:19:05.115 iops : min= 358, max= 780, avg=723.90, stdev=123.76, samples=20 00:19:05.115 lat (msec) : 4=0.04%, 10=0.07%, 20=0.31%, 50=1.23%, 100=91.96% 00:19:05.115 lat (msec) : 250=6.38% 00:19:05.115 cpu : usr=1.38%, sys=2.10%, ctx=8954, majf=0, minf=1 00:19:05.115 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:05.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:05.115 issued rwts: total=0,7302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:05.115 00:19:05.115 Run status group 0 (all jobs): 00:19:05.115 WRITE: bw=1530MiB/s (1604MB/s), 91.1MiB/s-209MiB/s (95.5MB/s-220MB/s), io=15.2GiB (16.3GB), run=10068-10152msec 00:19:05.115 00:19:05.115 Disk stats (read/write): 00:19:05.115 nvme0n1: ios=49/14859, merge=0/0, ticks=33/1214114, in_queue=1214147, util=97.69% 00:19:05.115 nvme10n1: ios=49/16694, merge=0/0, ticks=163/1213294, in_queue=1213457, util=98.27% 00:19:05.115 nvme1n1: ios=24/10736, merge=0/0, ticks=32/1212234, in_queue=1212266, util=97.88% 00:19:05.115 nvme2n1: ios=15/7256, merge=0/0, ticks=15/1209014, in_queue=1209029, util=98.02% 00:19:05.115 nvme3n1: ios=0/7662, merge=0/0, ticks=0/1209543, in_queue=1209543, util=97.97% 00:19:05.115 nvme4n1: ios=0/10684, merge=0/0, ticks=0/1213332, in_queue=1213332, util=98.19% 00:19:05.115 nvme5n1: ios=0/10799, merge=0/0, ticks=0/1211408, in_queue=1211408, util=98.32% 00:19:05.115 nvme6n1: ios=0/7300, merge=0/0, ticks=0/1208908, in_queue=1208908, util=98.41% 00:19:05.115 nvme7n1: ios=0/7254, merge=0/0, ticks=0/1208726, in_queue=1208726, util=98.60% 00:19:05.115 nvme8n1: ios=0/14883, merge=0/0, ticks=0/1215293, in_queue=1215293, util=98.90% 00:19:05.115 nvme9n1: ios=0/14457, merge=0/0, ticks=0/1216450, in_queue=1216450, util=98.99% 00:19:05.115 10:06:02 -- target/multiconnection.sh@36 -- # sync 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.115 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 
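The teardown now underway (multiconnection.sh@36-40 in the trace) repeats the same three steps for each of the 11 subsystems; condensed into a sketch, with the body of waitforserial_disconnect summarized in a comment:

sync
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme disconnect -n nqn.2016-06.io.spdk:cnode$i
    # waitforserial_disconnect SPDK$i: poll lsblk until no device with that
    # serial number is visible on the host any more
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
done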
00:19:05.115 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:05.115 10:06:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.115 10:06:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.115 10:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.115 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 10:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:05.115 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:05.115 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:05.115 10:06:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.115 10:06:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:05.115 10:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.115 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 10:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:05.115 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:05.115 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:05.115 10:06:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.115 10:06:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:05.115 10:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.115 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 10:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:05.115 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:05.115 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.115 10:06:02 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:05.115 10:06:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.115 10:06:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:05.115 10:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.115 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 10:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:05.115 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:05.115 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:05.115 10:06:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.115 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:05.115 10:06:02 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.115 10:06:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:05.115 10:06:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.115 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:19:05.115 10:06:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.115 10:06:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.115 10:06:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:05.115 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:05.116 10:06:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:05.116 10:06:02 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:05.116 10:06:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.116 10:06:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:05.116 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:05.116 10:06:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:05.116 10:06:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:03 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.116 10:06:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:05.116 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:05.116 10:06:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:05.116 10:06:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.116 10:06:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:05.116 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:05.116 10:06:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:05.116 10:06:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.116 10:06:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:05.116 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:05.116 10:06:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:05.116 10:06:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:05.116 10:06:03 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.116 10:06:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:05.116 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:05.116 10:06:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:05.116 10:06:03 -- common/autotest_common.sh@1208 -- # local i=0 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:05.116 10:06:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:05.116 10:06:03 -- common/autotest_common.sh@1220 -- # return 0 00:19:05.116 10:06:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:05.116 10:06:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.116 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:05.116 10:06:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.116 10:06:03 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:05.116 10:06:03 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:05.116 10:06:03 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:05.116 10:06:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.116 10:06:03 -- nvmf/common.sh@116 -- # sync 00:19:05.116 10:06:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:05.116 10:06:03 -- nvmf/common.sh@119 -- # set +e 00:19:05.116 10:06:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.116 10:06:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:05.116 rmmod nvme_tcp 00:19:05.116 rmmod nvme_fabrics 00:19:05.116 rmmod nvme_keyring 00:19:05.116 10:06:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.116 10:06:03 -- nvmf/common.sh@123 -- # set -e 00:19:05.116 10:06:03 -- nvmf/common.sh@124 -- # return 0 00:19:05.116 10:06:03 -- nvmf/common.sh@477 -- # '[' -n 90621 ']' 00:19:05.116 10:06:03 -- nvmf/common.sh@478 -- # killprocess 90621 00:19:05.116 10:06:03 -- common/autotest_common.sh@936 -- # '[' -z 90621 ']' 00:19:05.116 10:06:03 -- common/autotest_common.sh@940 -- # kill -0 90621 00:19:05.116 10:06:03 -- common/autotest_common.sh@941 -- # uname 00:19:05.116 10:06:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:05.116 10:06:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90621 00:19:05.116 10:06:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:05.116 10:06:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:05.116 killing process with pid 90621 00:19:05.116 10:06:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90621' 00:19:05.116 10:06:03 -- common/autotest_common.sh@955 -- # kill 90621 00:19:05.116 10:06:03 -- 
common/autotest_common.sh@960 -- # wait 90621 00:19:05.375 10:06:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.375 10:06:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.375 10:06:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.375 10:06:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.375 10:06:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.375 10:06:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.375 10:06:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.375 10:06:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.634 10:06:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:05.634 00:19:05.634 real 0m49.545s 00:19:05.634 user 2m42.844s 00:19:05.634 sys 0m28.080s 00:19:05.634 10:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:05.634 ************************************ 00:19:05.634 END TEST nvmf_multiconnection 00:19:05.634 ************************************ 00:19:05.634 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:19:05.634 10:06:04 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:05.634 10:06:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:05.634 10:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.634 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:19:05.634 ************************************ 00:19:05.634 START TEST nvmf_initiator_timeout 00:19:05.634 ************************************ 00:19:05.634 10:06:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:05.634 * Looking for test storage... 00:19:05.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:05.634 10:06:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:05.634 10:06:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:05.634 10:06:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:05.634 10:06:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:05.634 10:06:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:05.634 10:06:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:05.634 10:06:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:05.634 10:06:04 -- scripts/common.sh@335 -- # IFS=.-: 00:19:05.634 10:06:04 -- scripts/common.sh@335 -- # read -ra ver1 00:19:05.634 10:06:04 -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.634 10:06:04 -- scripts/common.sh@336 -- # read -ra ver2 00:19:05.634 10:06:04 -- scripts/common.sh@337 -- # local 'op=<' 00:19:05.634 10:06:04 -- scripts/common.sh@339 -- # ver1_l=2 00:19:05.634 10:06:04 -- scripts/common.sh@340 -- # ver2_l=1 00:19:05.634 10:06:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:05.634 10:06:04 -- scripts/common.sh@343 -- # case "$op" in 00:19:05.634 10:06:04 -- scripts/common.sh@344 -- # : 1 00:19:05.634 10:06:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:05.634 10:06:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.634 10:06:04 -- scripts/common.sh@364 -- # decimal 1 00:19:05.634 10:06:04 -- scripts/common.sh@352 -- # local d=1 00:19:05.634 10:06:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.634 10:06:04 -- scripts/common.sh@354 -- # echo 1 00:19:05.634 10:06:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:05.634 10:06:04 -- scripts/common.sh@365 -- # decimal 2 00:19:05.894 10:06:04 -- scripts/common.sh@352 -- # local d=2 00:19:05.894 10:06:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.894 10:06:04 -- scripts/common.sh@354 -- # echo 2 00:19:05.894 10:06:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:05.894 10:06:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:05.894 10:06:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:05.894 10:06:04 -- scripts/common.sh@367 -- # return 0 00:19:05.894 10:06:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.894 10:06:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.894 --rc genhtml_branch_coverage=1 00:19:05.894 --rc genhtml_function_coverage=1 00:19:05.894 --rc genhtml_legend=1 00:19:05.894 --rc geninfo_all_blocks=1 00:19:05.894 --rc geninfo_unexecuted_blocks=1 00:19:05.894 00:19:05.894 ' 00:19:05.894 10:06:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.894 --rc genhtml_branch_coverage=1 00:19:05.894 --rc genhtml_function_coverage=1 00:19:05.894 --rc genhtml_legend=1 00:19:05.894 --rc geninfo_all_blocks=1 00:19:05.894 --rc geninfo_unexecuted_blocks=1 00:19:05.894 00:19:05.894 ' 00:19:05.894 10:06:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.894 --rc genhtml_branch_coverage=1 00:19:05.894 --rc genhtml_function_coverage=1 00:19:05.894 --rc genhtml_legend=1 00:19:05.894 --rc geninfo_all_blocks=1 00:19:05.894 --rc geninfo_unexecuted_blocks=1 00:19:05.894 00:19:05.894 ' 00:19:05.894 10:06:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:05.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.894 --rc genhtml_branch_coverage=1 00:19:05.894 --rc genhtml_function_coverage=1 00:19:05.894 --rc genhtml_legend=1 00:19:05.894 --rc geninfo_all_blocks=1 00:19:05.894 --rc geninfo_unexecuted_blocks=1 00:19:05.894 00:19:05.894 ' 00:19:05.894 10:06:04 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:05.894 10:06:04 -- nvmf/common.sh@7 -- # uname -s 00:19:05.894 10:06:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.894 10:06:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.894 10:06:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.894 10:06:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.894 10:06:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.894 10:06:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.894 10:06:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.894 10:06:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.894 10:06:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.894 10:06:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.894 10:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
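The lt/cmp_versions calls traced above are a dotted-version compare, apparently used to pick between the old and new lcov --rc option spellings: both version strings are split on '.', '-' and ':' and compared field by field. A rough standalone equivalent, for illustration only (not the project's scripts/common.sh):

# ver_lt A B: succeed (return 0) when version A sorts strictly before B,
# e.g. ver_lt 1.15 2 is true, so the legacy lcov --rc spellings get used.
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov older than 2.x"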
00:19:05.894 10:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:19:05.894 10:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.894 10:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.894 10:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:05.894 10:06:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.894 10:06:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.894 10:06:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.894 10:06:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.894 10:06:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.894 10:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.895 10:06:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.895 10:06:04 -- paths/export.sh@5 -- # export PATH 00:19:05.895 10:06:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.895 10:06:04 -- nvmf/common.sh@46 -- # : 0 00:19:05.895 10:06:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:05.895 10:06:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:05.895 10:06:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:05.895 10:06:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.895 10:06:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.895 10:06:04 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:05.895 10:06:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:05.895 10:06:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:05.895 10:06:04 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.895 10:06:04 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.895 10:06:04 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:05.895 10:06:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:05.895 10:06:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.895 10:06:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:05.895 10:06:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:05.895 10:06:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:05.895 10:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.895 10:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.895 10:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.895 10:06:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:05.895 10:06:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:05.895 10:06:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:05.895 10:06:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:05.895 10:06:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:05.895 10:06:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:05.895 10:06:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.895 10:06:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.895 10:06:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:05.895 10:06:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:05.895 10:06:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:05.895 10:06:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:05.895 10:06:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:05.895 10:06:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.895 10:06:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:05.895 10:06:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:05.895 10:06:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:05.895 10:06:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:05.895 10:06:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:05.895 10:06:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:05.895 Cannot find device "nvmf_tgt_br" 00:19:05.895 10:06:04 -- nvmf/common.sh@154 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:05.895 Cannot find device "nvmf_tgt_br2" 00:19:05.895 10:06:04 -- nvmf/common.sh@155 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:05.895 10:06:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:05.895 Cannot find device "nvmf_tgt_br" 00:19:05.895 10:06:04 -- nvmf/common.sh@157 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:05.895 Cannot find device "nvmf_tgt_br2" 00:19:05.895 10:06:04 -- nvmf/common.sh@158 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:05.895 10:06:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:05.895 10:06:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:05.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.895 10:06:04 -- nvmf/common.sh@161 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.895 10:06:04 -- nvmf/common.sh@162 -- # true 00:19:05.895 10:06:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:05.895 10:06:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:05.895 10:06:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:05.895 10:06:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:05.895 10:06:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:05.895 10:06:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:05.895 10:06:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:05.895 10:06:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:05.895 10:06:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:05.895 10:06:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:05.895 10:06:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:05.895 10:06:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:05.895 10:06:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:05.895 10:06:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.154 10:06:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.155 10:06:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.155 10:06:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:06.155 10:06:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:06.155 10:06:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.155 10:06:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.155 10:06:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.155 10:06:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.155 10:06:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.155 10:06:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:06.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:06.155 00:19:06.155 --- 10.0.0.2 ping statistics --- 00:19:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.155 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:06.155 10:06:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:06.155 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.155 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:06.155 00:19:06.155 --- 10.0.0.3 ping statistics --- 00:19:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.155 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:06.155 10:06:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:19:06.155 00:19:06.155 --- 10.0.0.1 ping statistics --- 00:19:06.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.155 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:06.155 10:06:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.155 10:06:04 -- nvmf/common.sh@421 -- # return 0 00:19:06.155 10:06:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:06.155 10:06:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.155 10:06:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:06.155 10:06:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:06.155 10:06:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.155 10:06:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:06.155 10:06:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:06.155 10:06:04 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:06.155 10:06:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:06.155 10:06:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:06.155 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:19:06.155 10:06:04 -- nvmf/common.sh@469 -- # nvmfpid=91707 00:19:06.155 10:06:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.155 10:06:04 -- nvmf/common.sh@470 -- # waitforlisten 91707 00:19:06.155 10:06:04 -- common/autotest_common.sh@829 -- # '[' -z 91707 ']' 00:19:06.155 10:06:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.155 10:06:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.155 10:06:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.155 10:06:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.155 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:19:06.155 [2024-12-16 10:06:04.699380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:06.155 [2024-12-16 10:06:04.699457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.414 [2024-12-16 10:06:04.844879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.414 [2024-12-16 10:06:04.908710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:06.414 [2024-12-16 10:06:04.908880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.414 [2024-12-16 10:06:04.908895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.414 [2024-12-16 10:06:04.908908] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
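The nvmf_veth_init block above builds the self-contained TCP test network, and nvmfappstart then launches nvmf_tgt inside it: a network namespace for the target side, veth pairs joined by a bridge, 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables accept rule for port 4420. A condensed sketch of the same steps, with names matching the trace and the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted for brevity:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# One veth pair per side; the *_br peers get enslaved to a common bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # initiator -> target reachability check
# Start the SPDK target inside the namespace so it can listen on 10.0.0.2:4420.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &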
00:19:06.414 [2024-12-16 10:06:04.909090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.414 [2024-12-16 10:06:04.909559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.414 [2024-12-16 10:06:04.910328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.414 [2024-12-16 10:06:04.910402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.347 10:06:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.347 10:06:05 -- common/autotest_common.sh@862 -- # return 0 00:19:07.347 10:06:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:07.347 10:06:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:07.347 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.347 10:06:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.347 10:06:05 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:07.347 10:06:05 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 Malloc0 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 Delay0 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 [2024-12-16 10:06:05.741039] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.348 10:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.348 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:19:07.348 [2024-12-16 10:06:05.769231] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.348 10:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.348 10:06:05 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:07.348 10:06:05 -- common/autotest_common.sh@1187 -- # local i=0 00:19:07.348 10:06:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.348 10:06:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:07.348 10:06:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:09.880 10:06:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:09.880 10:06:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:09.880 10:06:07 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.880 10:06:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:09.880 10:06:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.880 10:06:07 -- common/autotest_common.sh@1197 -- # return 0 00:19:09.880 10:06:07 -- target/initiator_timeout.sh@35 -- # fio_pid=91789 00:19:09.880 10:06:07 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:09.880 10:06:07 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:09.880 [global] 00:19:09.880 thread=1 00:19:09.880 invalidate=1 00:19:09.880 rw=write 00:19:09.880 time_based=1 00:19:09.880 runtime=60 00:19:09.880 ioengine=libaio 00:19:09.880 direct=1 00:19:09.880 bs=4096 00:19:09.880 iodepth=1 00:19:09.880 norandommap=0 00:19:09.880 numjobs=1 00:19:09.880 00:19:09.880 verify_dump=1 00:19:09.880 verify_backlog=512 00:19:09.880 verify_state_save=0 00:19:09.880 do_verify=1 00:19:09.880 verify=crc32c-intel 00:19:09.880 [job0] 00:19:09.880 filename=/dev/nvme0n1 00:19:09.880 Could not set queue depth (nvme0n1) 00:19:09.880 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.880 fio-3.35 00:19:09.880 Starting 1 thread 00:19:12.411 10:06:10 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:12.411 10:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.411 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.411 true 00:19:12.411 10:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.411 10:06:10 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:12.411 10:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.411 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.411 true 00:19:12.411 10:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.411 10:06:10 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:12.411 10:06:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.411 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:19:12.411 true 00:19:12.411 10:06:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.411 10:06:11 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:12.411 10:06:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.411 10:06:11 -- common/autotest_common.sh@10 -- # set +x 00:19:12.411 true 00:19:12.411 10:06:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.411 10:06:11 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:15.696 10:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.696 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:15.696 true 00:19:15.696 10:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:15.696 10:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.696 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:15.696 true 00:19:15.696 10:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:15.696 10:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.696 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:15.696 true 00:19:15.696 10:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:15.696 10:06:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.696 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:19:15.696 true 00:19:15.696 10:06:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:15.696 10:06:14 -- target/initiator_timeout.sh@54 -- # wait 91789 00:20:12.027 00:20:12.027 job0: (groupid=0, jobs=1): err= 0: pid=91810: Mon Dec 16 10:07:08 2024 00:20:12.027 read: IOPS=761, BW=3044KiB/s (3117kB/s)(178MiB/60000msec) 00:20:12.027 slat (nsec): min=11279, max=95121, avg=15434.93, stdev=5775.55 00:20:12.027 clat (usec): min=152, max=1014, avg=214.03, stdev=45.47 00:20:12.027 lat (usec): min=165, max=1040, avg=229.46, stdev=47.78 00:20:12.027 clat percentiles (usec): 00:20:12.027 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:20:12.027 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 215], 00:20:12.027 | 70.00th=[ 233], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 302], 00:20:12.027 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 388], 00:20:12.027 | 99.99th=[ 619] 00:20:12.027 write: IOPS=768, BW=3072KiB/s (3146kB/s)(180MiB/60000msec); 0 zone resets 00:20:12.027 slat (usec): min=16, max=17708, avg=23.11, stdev=90.38 00:20:12.027 clat (usec): min=113, max=40556k, avg=1048.39, stdev=188929.69 00:20:12.027 lat (usec): min=135, max=40556k, avg=1071.50, stdev=188929.72 00:20:12.027 clat percentiles (usec): 00:20:12.027 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:20:12.027 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 159], 60.00th=[ 169], 00:20:12.027 | 70.00th=[ 182], 80.00th=[ 200], 90.00th=[ 223], 95.00th=[ 241], 00:20:12.027 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 338], 99.95th=[ 383], 00:20:12.027 | 99.99th=[ 758] 00:20:12.027 bw ( KiB/s): min= 968, max=12288, per=100.00%, avg=9241.64, stdev=2292.63, samples=39 00:20:12.027 iops : min= 242, max= 3072, avg=2310.41, stdev=573.16, samples=39 00:20:12.027 lat (usec) : 250=87.52%, 500=12.46%, 750=0.01%, 1000=0.01% 00:20:12.027 lat (msec) : 2=0.01%, >=2000=0.01% 00:20:12.027 cpu : usr=0.55%, sys=2.08%, ctx=91754, majf=0, minf=5 00:20:12.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:12.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.027 issued rwts: total=45666,46080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.027 00:20:12.027 Run status group 0 (all jobs): 00:20:12.027 READ: bw=3044KiB/s (3117kB/s), 3044KiB/s-3044KiB/s (3117kB/s-3117kB/s), io=178MiB (187MB), run=60000-60000msec 00:20:12.027 WRITE: bw=3072KiB/s (3146kB/s), 3072KiB/s-3072KiB/s (3146kB/s-3146kB/s), io=180MiB (189MB), run=60000-60000msec 00:20:12.027 00:20:12.027 Disk stats (read/write): 00:20:12.027 nvme0n1: ios=45788/45644, merge=0/0, ticks=10270/8340, in_queue=18610, util=99.63% 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:12.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:12.027 10:07:08 -- common/autotest_common.sh@1208 -- # local i=0 00:20:12.027 10:07:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:12.027 10:07:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.027 10:07:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:12.027 10:07:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.027 10:07:08 -- common/autotest_common.sh@1220 -- # return 0 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:12.027 nvmf hotplug test: fio successful as expected 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.027 10:07:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.027 10:07:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 10:07:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:12.027 10:07:08 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:12.027 10:07:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:12.027 10:07:08 -- nvmf/common.sh@116 -- # sync 00:20:12.027 10:07:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:12.027 10:07:08 -- nvmf/common.sh@119 -- # set +e 00:20:12.027 10:07:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:12.027 10:07:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:12.027 rmmod nvme_tcp 00:20:12.027 rmmod nvme_fabrics 00:20:12.027 rmmod nvme_keyring 00:20:12.027 10:07:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:12.027 10:07:08 -- nvmf/common.sh@123 -- # set -e 00:20:12.027 10:07:08 -- nvmf/common.sh@124 -- # return 0 00:20:12.027 10:07:08 -- nvmf/common.sh@477 -- # '[' -n 91707 ']' 00:20:12.027 10:07:08 -- nvmf/common.sh@478 -- # killprocess 91707 00:20:12.027 10:07:08 -- common/autotest_common.sh@936 -- # '[' -z 91707 ']' 00:20:12.027 10:07:08 -- common/autotest_common.sh@940 -- # kill -0 91707 00:20:12.027 10:07:08 -- common/autotest_common.sh@941 -- # uname 00:20:12.027 10:07:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.027 10:07:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91707 00:20:12.027 killing process with pid 91707 
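That 60-second run is the heart of the initiator-timeout check: the namespace fio writes to is backed by a delay bdev (Delay0 over Malloc0), its average and p99 latencies are pushed up to roughly 31 seconds while I/O is in flight, held there briefly, and then dropped back to about 30 microseconds; the test passes because fio still completes with status 0, as echoed above. A compressed sketch of the RPC sequence (values condensed from the trace; $fio_pid stands for the backgrounded fio job the real script starts in between):

# Delayed namespace: a malloc bdev wrapped by a delay bdev, exported over TCP.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... nvme connect and the fio job start here; $fio_pid is the fio job's PID ...
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000   # ~31 s
done
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30         # back to ~30 us
done
wait "$fio_pid" && echo "nvmf hotplug test: fio successful as expected"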
00:20:12.027 10:07:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.027 10:07:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.027 10:07:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91707' 00:20:12.027 10:07:08 -- common/autotest_common.sh@955 -- # kill 91707 00:20:12.027 10:07:08 -- common/autotest_common.sh@960 -- # wait 91707 00:20:12.027 10:07:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:12.027 10:07:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:12.027 10:07:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:12.027 10:07:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.027 10:07:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:12.027 10:07:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.027 10:07:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.027 10:07:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.027 10:07:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:12.027 00:20:12.027 real 1m4.754s 00:20:12.027 user 4m5.012s 00:20:12.027 sys 0m10.274s 00:20:12.027 10:07:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:12.027 10:07:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 ************************************ 00:20:12.027 END TEST nvmf_initiator_timeout 00:20:12.027 ************************************ 00:20:12.027 10:07:08 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:12.027 10:07:08 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:12.027 10:07:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.027 10:07:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 10:07:08 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:12.027 10:07:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.027 10:07:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.027 10:07:08 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:12.027 10:07:08 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.027 10:07:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:12.028 10:07:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.028 10:07:08 -- common/autotest_common.sh@10 -- # set +x 00:20:12.028 ************************************ 00:20:12.028 START TEST nvmf_multicontroller 00:20:12.028 ************************************ 00:20:12.028 10:07:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.028 * Looking for test storage... 
00:20:12.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.028 10:07:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:12.028 10:07:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:12.028 10:07:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:12.028 10:07:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:12.028 10:07:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:12.028 10:07:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:12.028 10:07:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:12.028 10:07:09 -- scripts/common.sh@335 -- # IFS=.-: 00:20:12.028 10:07:09 -- scripts/common.sh@335 -- # read -ra ver1 00:20:12.028 10:07:09 -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.028 10:07:09 -- scripts/common.sh@336 -- # read -ra ver2 00:20:12.028 10:07:09 -- scripts/common.sh@337 -- # local 'op=<' 00:20:12.028 10:07:09 -- scripts/common.sh@339 -- # ver1_l=2 00:20:12.028 10:07:09 -- scripts/common.sh@340 -- # ver2_l=1 00:20:12.028 10:07:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:12.028 10:07:09 -- scripts/common.sh@343 -- # case "$op" in 00:20:12.028 10:07:09 -- scripts/common.sh@344 -- # : 1 00:20:12.028 10:07:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:12.028 10:07:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:12.028 10:07:09 -- scripts/common.sh@364 -- # decimal 1 00:20:12.028 10:07:09 -- scripts/common.sh@352 -- # local d=1 00:20:12.028 10:07:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.028 10:07:09 -- scripts/common.sh@354 -- # echo 1 00:20:12.028 10:07:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:12.028 10:07:09 -- scripts/common.sh@365 -- # decimal 2 00:20:12.028 10:07:09 -- scripts/common.sh@352 -- # local d=2 00:20:12.028 10:07:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.028 10:07:09 -- scripts/common.sh@354 -- # echo 2 00:20:12.028 10:07:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:12.028 10:07:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:12.028 10:07:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:12.028 10:07:09 -- scripts/common.sh@367 -- # return 0 00:20:12.028 10:07:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.028 10:07:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.028 --rc genhtml_branch_coverage=1 00:20:12.028 --rc genhtml_function_coverage=1 00:20:12.028 --rc genhtml_legend=1 00:20:12.028 --rc geninfo_all_blocks=1 00:20:12.028 --rc geninfo_unexecuted_blocks=1 00:20:12.028 00:20:12.028 ' 00:20:12.028 10:07:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.028 --rc genhtml_branch_coverage=1 00:20:12.028 --rc genhtml_function_coverage=1 00:20:12.028 --rc genhtml_legend=1 00:20:12.028 --rc geninfo_all_blocks=1 00:20:12.028 --rc geninfo_unexecuted_blocks=1 00:20:12.028 00:20:12.028 ' 00:20:12.028 10:07:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.028 --rc genhtml_branch_coverage=1 00:20:12.028 --rc genhtml_function_coverage=1 00:20:12.028 --rc genhtml_legend=1 00:20:12.028 --rc geninfo_all_blocks=1 00:20:12.028 --rc geninfo_unexecuted_blocks=1 00:20:12.028 00:20:12.028 ' 00:20:12.028 
10:07:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.028 --rc genhtml_branch_coverage=1 00:20:12.028 --rc genhtml_function_coverage=1 00:20:12.028 --rc genhtml_legend=1 00:20:12.028 --rc geninfo_all_blocks=1 00:20:12.028 --rc geninfo_unexecuted_blocks=1 00:20:12.028 00:20:12.028 ' 00:20:12.028 10:07:09 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.028 10:07:09 -- nvmf/common.sh@7 -- # uname -s 00:20:12.028 10:07:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.028 10:07:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.028 10:07:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.028 10:07:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.028 10:07:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.028 10:07:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.028 10:07:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.028 10:07:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.028 10:07:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.028 10:07:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:12.028 10:07:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:12.028 10:07:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.028 10:07:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.028 10:07:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.028 10:07:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.028 10:07:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.028 10:07:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.028 10:07:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.028 10:07:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.028 10:07:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.028 10:07:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.028 10:07:09 -- paths/export.sh@5 -- # export PATH 00:20:12.028 10:07:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.028 10:07:09 -- nvmf/common.sh@46 -- # : 0 00:20:12.028 10:07:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:12.028 10:07:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:12.028 10:07:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:12.028 10:07:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.028 10:07:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.028 10:07:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:12.028 10:07:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:12.028 10:07:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:12.028 10:07:09 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.028 10:07:09 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.028 10:07:09 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:12.028 10:07:09 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:12.028 10:07:09 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.028 10:07:09 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:12.028 10:07:09 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:12.028 10:07:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:12.028 10:07:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.028 10:07:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:12.028 10:07:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:12.028 10:07:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:12.028 10:07:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.028 10:07:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.028 10:07:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.028 10:07:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:12.028 10:07:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:12.028 10:07:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.028 10:07:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
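For orientation, the nvmf_veth_init variables above describe the same addressing plan the earlier target tests used; roughly (interface names as they appear in the trace, layout assumed):

#   initiator (root netns)                         target (netns nvmf_tgt_ns_spdk)
#   nvmf_init_if  10.0.0.1 --- nvmf_init_br --+-- nvmf_br --+-- nvmf_tgt_br  --- nvmf_tgt_if   10.0.0.2
#                                              (bridge)     +-- nvmf_tgt_br2 --- nvmf_tgt_if2  10.0.0.3
#   NVMe/TCP listeners use port 4420, with 4421/4422 available as extra ports.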
00:20:12.028 10:07:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:12.028 10:07:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:12.028 10:07:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.028 10:07:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.028 10:07:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.028 10:07:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.028 10:07:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.028 10:07:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.028 10:07:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.028 10:07:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.028 10:07:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:12.028 10:07:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:12.028 Cannot find device "nvmf_tgt_br" 00:20:12.028 10:07:09 -- nvmf/common.sh@154 -- # true 00:20:12.028 10:07:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.028 Cannot find device "nvmf_tgt_br2" 00:20:12.028 10:07:09 -- nvmf/common.sh@155 -- # true 00:20:12.029 10:07:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:12.029 10:07:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:12.029 Cannot find device "nvmf_tgt_br" 00:20:12.029 10:07:09 -- nvmf/common.sh@157 -- # true 00:20:12.029 10:07:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:12.029 Cannot find device "nvmf_tgt_br2" 00:20:12.029 10:07:09 -- nvmf/common.sh@158 -- # true 00:20:12.029 10:07:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:12.029 10:07:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:12.029 10:07:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.029 10:07:09 -- nvmf/common.sh@161 -- # true 00:20:12.029 10:07:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.029 10:07:09 -- nvmf/common.sh@162 -- # true 00:20:12.029 10:07:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.029 10:07:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.029 10:07:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.029 10:07:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.029 10:07:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.029 10:07:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.029 10:07:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.029 10:07:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:12.029 10:07:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:12.029 10:07:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:12.029 10:07:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:12.029 10:07:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
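The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init begins with an unconditional cleanup pass that tears down whatever a previous run may have left behind, and each failing delete is simply tolerated before the topology is rebuilt. A condensed sketch of that pass, assuming the same interface and namespace names as in the trace (the real common.sh differs in detail):

    # best-effort teardown; every step may fail harmlessly on a clean machine
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true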
00:20:12.029 10:07:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:12.029 10:07:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.029 10:07:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.029 10:07:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.029 10:07:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:12.029 10:07:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:12.029 10:07:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.029 10:07:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.029 10:07:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.029 10:07:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.029 10:07:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.029 10:07:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:12.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:20:12.029 00:20:12.029 --- 10.0.0.2 ping statistics --- 00:20:12.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.029 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:12.029 10:07:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:12.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:12.029 00:20:12.029 --- 10.0.0.3 ping statistics --- 00:20:12.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.029 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:12.029 10:07:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:12.029 00:20:12.029 --- 10.0.0.1 ping statistics --- 00:20:12.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.029 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:12.029 10:07:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.029 10:07:09 -- nvmf/common.sh@421 -- # return 0 00:20:12.029 10:07:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:12.029 10:07:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.029 10:07:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:12.029 10:07:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:12.029 10:07:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.029 10:07:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:12.029 10:07:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:12.029 10:07:09 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:12.029 10:07:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:12.029 10:07:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.029 10:07:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 10:07:09 -- nvmf/common.sh@469 -- # nvmfpid=92651 00:20:12.029 10:07:09 -- nvmf/common.sh@470 -- # waitforlisten 92651 00:20:12.029 10:07:09 -- common/autotest_common.sh@829 -- # '[' -z 92651 ']' 00:20:12.029 10:07:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:12.029 10:07:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.029 10:07:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.029 10:07:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.029 10:07:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.029 10:07:09 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 [2024-12-16 10:07:09.496789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:12.029 [2024-12-16 10:07:09.497396] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.029 [2024-12-16 10:07:09.638802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:12.029 [2024-12-16 10:07:09.710145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:12.029 [2024-12-16 10:07:09.710284] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.029 [2024-12-16 10:07:09.710298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.029 [2024-12-16 10:07:09.710318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
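By this point nvmftestinit has finished building the virtual test network and verified it with the three pings: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, while the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, all stitched together through the nvmf_br bridge, with TCP port 4420 opened in iptables. A minimal recreation of that topology, using the same names as the trace (one target interface shown; nvmf_tgt_if2/10.0.0.3 follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace reachability check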
00:20:12.029 [2024-12-16 10:07:09.710503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.029 [2024-12-16 10:07:09.711008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.029 [2024-12-16 10:07:09.711051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.029 10:07:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.029 10:07:10 -- common/autotest_common.sh@862 -- # return 0 00:20:12.029 10:07:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:12.029 10:07:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 10:07:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.029 10:07:10 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 [2024-12-16 10:07:10.549421] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 Malloc0 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 [2024-12-16 10:07:10.623194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.029 [2024-12-16 10:07:10.631092] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:12.029 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.029 10:07:10 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.029 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.029 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.289 Malloc1 00:20:12.289 10:07:10 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.289 10:07:10 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:12.289 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.289 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.289 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.289 10:07:10 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:12.289 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.289 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.289 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.289 10:07:10 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:12.289 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.289 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.289 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.289 10:07:10 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:12.289 10:07:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.289 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:12.289 10:07:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.289 10:07:10 -- host/multicontroller.sh@44 -- # bdevperf_pid=92703 00:20:12.289 10:07:10 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.289 10:07:10 -- host/multicontroller.sh@47 -- # waitforlisten 92703 /var/tmp/bdevperf.sock 00:20:12.289 10:07:10 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:12.289 10:07:10 -- common/autotest_common.sh@829 -- # '[' -z 92703 ']' 00:20:12.289 10:07:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.289 10:07:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.289 10:07:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
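The target-side configuration built by the RPCs traced above is small: a TCP transport, one 64 MiB / 512 B-block malloc bdev per subsystem, and two subsystems (cnode1 and cnode2) each listening on 10.0.0.2 ports 4420 and 4421; the bdevperf host process is then started in wait mode (-z) on its own RPC socket so the attach/detach checks that follow can be driven against it. Roughly the same setup as standalone rpc.py calls, assuming the default /var/tmp/spdk.sock target socket (the cnode2/Malloc1 lines repeat the cnode1 ones):

    # target side
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # host side: bdevperf waits for bdev_nvme_attach_controller RPCs before running I/O
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &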
00:20:12.289 10:07:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.289 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:20:13.225 10:07:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.225 10:07:11 -- common/autotest_common.sh@862 -- # return 0 00:20:13.225 10:07:11 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.225 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.225 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.484 NVMe0n1 00:20:13.484 10:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.484 10:07:11 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.484 10:07:11 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:13.484 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.484 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.484 10:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.484 1 00:20:13.484 10:07:11 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.484 10:07:11 -- common/autotest_common.sh@650 -- # local es=0 00:20:13.484 10:07:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.484 10:07:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.484 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.484 10:07:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.484 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.484 10:07:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.484 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.484 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.484 2024/12/16 10:07:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.484 request: 00:20:13.484 { 00:20:13.484 "method": "bdev_nvme_attach_controller", 00:20:13.484 "params": { 00:20:13.484 "name": "NVMe0", 00:20:13.484 "trtype": "tcp", 00:20:13.484 "traddr": "10.0.0.2", 00:20:13.484 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:13.484 "hostaddr": "10.0.0.2", 00:20:13.484 "hostsvcid": "60000", 00:20:13.484 "adrfam": "ipv4", 00:20:13.484 "trsvcid": "4420", 00:20:13.484 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:13.484 } 00:20:13.484 } 00:20:13.485 Got JSON-RPC error response 00:20:13.485 GoRPCClient: error on JSON-RPC call 00:20:13.485 10:07:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.485 10:07:11 -- 
common/autotest_common.sh@653 -- # es=1 00:20:13.485 10:07:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.485 10:07:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.485 10:07:11 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.485 10:07:11 -- common/autotest_common.sh@650 -- # local es=0 00:20:13.485 10:07:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.485 10:07:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.485 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 2024/12/16 10:07:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.485 request: 00:20:13.485 { 00:20:13.485 "method": "bdev_nvme_attach_controller", 00:20:13.485 "params": { 00:20:13.485 "name": "NVMe0", 00:20:13.485 "trtype": "tcp", 00:20:13.485 "traddr": "10.0.0.2", 00:20:13.485 "hostaddr": "10.0.0.2", 00:20:13.485 "hostsvcid": "60000", 00:20:13.485 "adrfam": "ipv4", 00:20:13.485 "trsvcid": "4420", 00:20:13.485 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:13.485 } 00:20:13.485 } 00:20:13.485 Got JSON-RPC error response 00:20:13.485 GoRPCClient: error on JSON-RPC call 00:20:13.485 10:07:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # es=1 00:20:13.485 10:07:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.485 10:07:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.485 10:07:11 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@650 -- # local es=0 00:20:13.485 10:07:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.485 10:07:11 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 2024/12/16 10:07:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:13.485 request: 00:20:13.485 { 00:20:13.485 "method": "bdev_nvme_attach_controller", 00:20:13.485 "params": { 00:20:13.485 "name": "NVMe0", 00:20:13.485 "trtype": "tcp", 00:20:13.485 "traddr": "10.0.0.2", 00:20:13.485 "hostaddr": "10.0.0.2", 00:20:13.485 "hostsvcid": "60000", 00:20:13.485 "adrfam": "ipv4", 00:20:13.485 "trsvcid": "4420", 00:20:13.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.485 "multipath": "disable" 00:20:13.485 } 00:20:13.485 } 00:20:13.485 Got JSON-RPC error response 00:20:13.485 GoRPCClient: error on JSON-RPC call 00:20:13.485 10:07:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # es=1 00:20:13.485 10:07:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.485 10:07:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.485 10:07:11 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.485 10:07:11 -- common/autotest_common.sh@650 -- # local es=0 00:20:13.485 10:07:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.485 10:07:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:13.485 10:07:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.485 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 2024/12/16 10:07:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.485 request: 00:20:13.485 { 00:20:13.485 "method": "bdev_nvme_attach_controller", 00:20:13.485 "params": { 00:20:13.485 "name": "NVMe0", 
00:20:13.485 "trtype": "tcp", 00:20:13.485 "traddr": "10.0.0.2", 00:20:13.485 "hostaddr": "10.0.0.2", 00:20:13.485 "hostsvcid": "60000", 00:20:13.485 "adrfam": "ipv4", 00:20:13.485 "trsvcid": "4420", 00:20:13.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.485 "multipath": "failover" 00:20:13.485 } 00:20:13.485 } 00:20:13.485 Got JSON-RPC error response 00:20:13.485 GoRPCClient: error on JSON-RPC call 00:20:13.485 10:07:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@653 -- # es=1 00:20:13.485 10:07:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:13.485 10:07:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:13.485 10:07:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:13.485 10:07:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.485 10:07:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:11 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 00:20:13.485 10:07:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.485 10:07:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.485 10:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 10:07:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.485 10:07:12 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.485 10:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 00:20:13.485 10:07:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.485 10:07:12 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.485 10:07:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.485 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:20:13.485 10:07:12 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:13.485 10:07:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.485 10:07:12 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:13.485 10:07:12 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.863 0 00:20:14.863 10:07:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:14.863 10:07:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.863 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:20:14.863 10:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.863 10:07:13 -- host/multicontroller.sh@100 -- # killprocess 92703 00:20:14.863 10:07:13 -- common/autotest_common.sh@936 -- # '[' -z 92703 ']' 00:20:14.863 10:07:13 -- common/autotest_common.sh@940 -- # kill -0 92703 00:20:14.863 10:07:13 -- common/autotest_common.sh@941 -- # uname 00:20:14.863 10:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.863 10:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92703 00:20:14.863 10:07:13 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:14.863 10:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.863 killing process with pid 92703 00:20:14.863 10:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92703' 00:20:14.863 10:07:13 -- common/autotest_common.sh@955 -- # kill 92703 00:20:14.863 10:07:13 -- common/autotest_common.sh@960 -- # wait 92703 00:20:15.123 10:07:13 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.123 10:07:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.123 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:20:15.123 10:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.123 10:07:13 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:15.123 10:07:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.123 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:20:15.123 10:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.123 10:07:13 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:15.123 10:07:13 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.123 10:07:13 -- common/autotest_common.sh@1607 -- # read -r file 00:20:15.123 10:07:13 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:15.123 10:07:13 -- common/autotest_common.sh@1606 -- # sort -u 00:20:15.123 10:07:13 -- common/autotest_common.sh@1608 -- # cat 00:20:15.123 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:15.123 [2024-12-16 10:07:10.738525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:15.123 [2024-12-16 10:07:10.738634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92703 ] 00:20:15.123 [2024-12-16 10:07:10.871620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.123 [2024-12-16 10:07:10.948290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.123 [2024-12-16 10:07:12.079277] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name d739cfc1-e923-4fe4-aaa3-eff420b4be8d already exists 00:20:15.123 [2024-12-16 10:07:12.079337] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:d739cfc1-e923-4fe4-aaa3-eff420b4be8d alias for bdev NVMe1n1 00:20:15.123 [2024-12-16 10:07:12.079384] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:15.123 Running I/O for 1 seconds... 
00:20:15.123 00:20:15.123 Latency(us) 00:20:15.123 [2024-12-16T10:07:13.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.123 [2024-12-16T10:07:13.748Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:15.123 NVMe0n1 : 1.01 23070.93 90.12 0.00 0.00 5533.92 3068.28 11260.28 00:20:15.123 [2024-12-16T10:07:13.748Z] =================================================================================================================== 00:20:15.123 [2024-12-16T10:07:13.748Z] Total : 23070.93 90.12 0.00 0.00 5533.92 3068.28 11260.28 00:20:15.123 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.123 00:20:15.123 Latency(us) 00:20:15.123 [2024-12-16T10:07:13.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.123 [2024-12-16T10:07:13.748Z] =================================================================================================================== 00:20:15.123 [2024-12-16T10:07:13.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.123 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:15.123 10:07:13 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.123 10:07:13 -- common/autotest_common.sh@1607 -- # read -r file 00:20:15.123 10:07:13 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:15.123 10:07:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:15.123 10:07:13 -- nvmf/common.sh@116 -- # sync 00:20:15.123 10:07:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:15.123 10:07:13 -- nvmf/common.sh@119 -- # set +e 00:20:15.123 10:07:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:15.123 10:07:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:15.123 rmmod nvme_tcp 00:20:15.123 rmmod nvme_fabrics 00:20:15.123 rmmod nvme_keyring 00:20:15.123 10:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:15.123 10:07:13 -- nvmf/common.sh@123 -- # set -e 00:20:15.123 10:07:13 -- nvmf/common.sh@124 -- # return 0 00:20:15.123 10:07:13 -- nvmf/common.sh@477 -- # '[' -n 92651 ']' 00:20:15.123 10:07:13 -- nvmf/common.sh@478 -- # killprocess 92651 00:20:15.123 10:07:13 -- common/autotest_common.sh@936 -- # '[' -z 92651 ']' 00:20:15.123 10:07:13 -- common/autotest_common.sh@940 -- # kill -0 92651 00:20:15.123 10:07:13 -- common/autotest_common.sh@941 -- # uname 00:20:15.123 10:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.123 10:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92651 00:20:15.123 10:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:15.123 killing process with pid 92651 00:20:15.123 10:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:15.123 10:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92651' 00:20:15.123 10:07:13 -- common/autotest_common.sh@955 -- # kill 92651 00:20:15.123 10:07:13 -- common/autotest_common.sh@960 -- # wait 92651 00:20:15.382 10:07:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:15.382 10:07:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:15.382 10:07:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:15.382 10:07:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.382 10:07:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:15.382 10:07:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.382 10:07:13 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:15.382 10:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.382 10:07:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:15.382 00:20:15.382 real 0m5.035s 00:20:15.382 user 0m15.909s 00:20:15.382 sys 0m1.143s 00:20:15.382 10:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:15.382 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:20:15.382 ************************************ 00:20:15.382 END TEST nvmf_multicontroller 00:20:15.382 ************************************ 00:20:15.642 10:07:14 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.642 10:07:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.642 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.642 ************************************ 00:20:15.642 START TEST nvmf_aer 00:20:15.642 ************************************ 00:20:15.642 10:07:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.642 * Looking for test storage... 00:20:15.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.642 10:07:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:15.642 10:07:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:15.642 10:07:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:15.642 10:07:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:15.642 10:07:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:15.642 10:07:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:15.642 10:07:14 -- scripts/common.sh@335 -- # IFS=.-: 00:20:15.642 10:07:14 -- scripts/common.sh@335 -- # read -ra ver1 00:20:15.642 10:07:14 -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.642 10:07:14 -- scripts/common.sh@336 -- # read -ra ver2 00:20:15.642 10:07:14 -- scripts/common.sh@337 -- # local 'op=<' 00:20:15.642 10:07:14 -- scripts/common.sh@339 -- # ver1_l=2 00:20:15.642 10:07:14 -- scripts/common.sh@340 -- # ver2_l=1 00:20:15.642 10:07:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:15.642 10:07:14 -- scripts/common.sh@343 -- # case "$op" in 00:20:15.642 10:07:14 -- scripts/common.sh@344 -- # : 1 00:20:15.642 10:07:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:15.642 10:07:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:15.642 10:07:14 -- scripts/common.sh@364 -- # decimal 1 00:20:15.642 10:07:14 -- scripts/common.sh@352 -- # local d=1 00:20:15.642 10:07:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.642 10:07:14 -- scripts/common.sh@354 -- # echo 1 00:20:15.642 10:07:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:15.642 10:07:14 -- scripts/common.sh@365 -- # decimal 2 00:20:15.642 10:07:14 -- scripts/common.sh@352 -- # local d=2 00:20:15.642 10:07:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.642 10:07:14 -- scripts/common.sh@354 -- # echo 2 00:20:15.642 10:07:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:15.642 10:07:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:15.642 10:07:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:15.642 10:07:14 -- scripts/common.sh@367 -- # return 0 00:20:15.642 10:07:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.642 --rc genhtml_branch_coverage=1 00:20:15.642 --rc genhtml_function_coverage=1 00:20:15.642 --rc genhtml_legend=1 00:20:15.642 --rc geninfo_all_blocks=1 00:20:15.642 --rc geninfo_unexecuted_blocks=1 00:20:15.642 00:20:15.642 ' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.642 --rc genhtml_branch_coverage=1 00:20:15.642 --rc genhtml_function_coverage=1 00:20:15.642 --rc genhtml_legend=1 00:20:15.642 --rc geninfo_all_blocks=1 00:20:15.642 --rc geninfo_unexecuted_blocks=1 00:20:15.642 00:20:15.642 ' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.642 --rc genhtml_branch_coverage=1 00:20:15.642 --rc genhtml_function_coverage=1 00:20:15.642 --rc genhtml_legend=1 00:20:15.642 --rc geninfo_all_blocks=1 00:20:15.642 --rc geninfo_unexecuted_blocks=1 00:20:15.642 00:20:15.642 ' 00:20:15.642 10:07:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.642 --rc genhtml_branch_coverage=1 00:20:15.642 --rc genhtml_function_coverage=1 00:20:15.642 --rc genhtml_legend=1 00:20:15.642 --rc geninfo_all_blocks=1 00:20:15.642 --rc geninfo_unexecuted_blocks=1 00:20:15.642 00:20:15.642 ' 00:20:15.642 10:07:14 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.642 10:07:14 -- nvmf/common.sh@7 -- # uname -s 00:20:15.642 10:07:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.642 10:07:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.642 10:07:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.642 10:07:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.642 10:07:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.642 10:07:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.642 10:07:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.642 10:07:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.642 10:07:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.642 10:07:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.642 10:07:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:15.642 
10:07:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:15.642 10:07:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.642 10:07:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.642 10:07:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.643 10:07:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.643 10:07:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.643 10:07:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.643 10:07:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.643 10:07:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.643 10:07:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.643 10:07:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.643 10:07:14 -- paths/export.sh@5 -- # export PATH 00:20:15.643 10:07:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.643 10:07:14 -- nvmf/common.sh@46 -- # : 0 00:20:15.643 10:07:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.643 10:07:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.643 10:07:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.643 10:07:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.643 10:07:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.643 10:07:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:15.643 10:07:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.643 10:07:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.643 10:07:14 -- host/aer.sh@11 -- # nvmftestinit 00:20:15.643 10:07:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:15.643 10:07:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.643 10:07:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:15.643 10:07:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:15.643 10:07:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:15.643 10:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.643 10:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.643 10:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.643 10:07:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:15.643 10:07:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:15.643 10:07:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:15.643 10:07:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:15.643 10:07:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:15.643 10:07:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:15.643 10:07:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.643 10:07:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.643 10:07:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.643 10:07:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:15.643 10:07:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.643 10:07:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.643 10:07:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.643 10:07:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.643 10:07:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.643 10:07:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.643 10:07:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.643 10:07:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.643 10:07:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:15.643 10:07:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:15.643 Cannot find device "nvmf_tgt_br" 00:20:15.643 10:07:14 -- nvmf/common.sh@154 -- # true 00:20:15.643 10:07:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.643 Cannot find device "nvmf_tgt_br2" 00:20:15.643 10:07:14 -- nvmf/common.sh@155 -- # true 00:20:15.643 10:07:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:15.643 10:07:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:15.643 Cannot find device "nvmf_tgt_br" 00:20:15.643 10:07:14 -- nvmf/common.sh@157 -- # true 00:20:15.643 10:07:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:15.902 Cannot find device "nvmf_tgt_br2" 00:20:15.902 10:07:14 -- nvmf/common.sh@158 -- # true 00:20:15.902 10:07:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:15.902 10:07:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:15.902 10:07:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.902 10:07:14 -- nvmf/common.sh@161 -- # true 00:20:15.902 10:07:14 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.902 10:07:14 -- nvmf/common.sh@162 -- # true 00:20:15.902 10:07:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.902 10:07:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.902 10:07:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.902 10:07:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.902 10:07:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.902 10:07:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.902 10:07:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.902 10:07:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.902 10:07:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.902 10:07:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:15.902 10:07:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:15.902 10:07:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:15.902 10:07:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:15.902 10:07:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.903 10:07:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.903 10:07:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.903 10:07:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:15.903 10:07:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:15.903 10:07:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.903 10:07:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.903 10:07:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.903 10:07:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.903 10:07:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.903 10:07:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:15.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:15.903 00:20:15.903 --- 10.0.0.2 ping statistics --- 00:20:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.903 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:15.903 10:07:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:15.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:20:15.903 00:20:15.903 --- 10.0.0.3 ping statistics --- 00:20:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.903 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:15.903 10:07:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:15.903 00:20:15.903 --- 10.0.0.1 ping statistics --- 00:20:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.903 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:15.903 10:07:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.903 10:07:14 -- nvmf/common.sh@421 -- # return 0 00:20:15.903 10:07:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.903 10:07:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.903 10:07:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:15.903 10:07:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:15.903 10:07:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.903 10:07:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:15.903 10:07:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:15.903 10:07:14 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:15.903 10:07:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:15.903 10:07:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.903 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:20:15.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.903 10:07:14 -- nvmf/common.sh@469 -- # nvmfpid=92967 00:20:15.903 10:07:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:15.903 10:07:14 -- nvmf/common.sh@470 -- # waitforlisten 92967 00:20:15.903 10:07:14 -- common/autotest_common.sh@829 -- # '[' -z 92967 ']' 00:20:15.903 10:07:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.903 10:07:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.903 10:07:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.903 10:07:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.903 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:20:16.162 [2024-12-16 10:07:14.574749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:16.162 [2024-12-16 10:07:14.575008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.162 [2024-12-16 10:07:14.716505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.162 [2024-12-16 10:07:14.784726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:16.162 [2024-12-16 10:07:14.785216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.162 [2024-12-16 10:07:14.785293] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.162 [2024-12-16 10:07:14.785522] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
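As in the multicontroller test, the target is launched inside the network namespace so that it owns the 10.0.0.2/10.0.0.3 listener addresses, this time with core mask 0xF (cores 0 through 3, matching the "Total cores available: 4" line), and the script blocks until the RPC server answers. A simplified stand-in for the nvmfappstart/waitforlisten pair seen in the trace (the real helper retries an RPC rather than just checking for the socket file):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude wait: common.sh polls the RPC server on /var/tmp/spdk.sock instead
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done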
00:20:16.162 [2024-12-16 10:07:14.785673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.162 [2024-12-16 10:07:14.785822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.421 [2024-12-16 10:07:14.785955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.421 [2024-12-16 10:07:14.785958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.988 10:07:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.988 10:07:15 -- common/autotest_common.sh@862 -- # return 0 00:20:16.988 10:07:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:16.988 10:07:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:16.988 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:16.988 10:07:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.988 10:07:15 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:16.988 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.988 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:16.988 [2024-12-16 10:07:15.589456] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.988 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.988 10:07:15 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:16.988 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.988 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 Malloc0 00:20:17.247 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.247 10:07:15 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:17.247 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.247 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.247 10:07:15 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:17.247 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.247 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.247 10:07:15 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.247 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.247 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 [2024-12-16 10:07:15.655807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.247 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.247 10:07:15 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:17.247 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.247 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.247 [2024-12-16 10:07:15.663538] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:17.247 [ 00:20:17.247 { 00:20:17.247 "allow_any_host": true, 00:20:17.247 "hosts": [], 00:20:17.247 "listen_addresses": [], 00:20:17.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.247 "subtype": "Discovery" 00:20:17.247 }, 00:20:17.247 { 00:20:17.247 "allow_any_host": true, 00:20:17.247 "hosts": 
[], 00:20:17.247 "listen_addresses": [ 00:20:17.247 { 00:20:17.247 "adrfam": "IPv4", 00:20:17.247 "traddr": "10.0.0.2", 00:20:17.247 "transport": "TCP", 00:20:17.247 "trsvcid": "4420", 00:20:17.247 "trtype": "TCP" 00:20:17.247 } 00:20:17.247 ], 00:20:17.247 "max_cntlid": 65519, 00:20:17.247 "max_namespaces": 2, 00:20:17.247 "min_cntlid": 1, 00:20:17.247 "model_number": "SPDK bdev Controller", 00:20:17.247 "namespaces": [ 00:20:17.247 { 00:20:17.247 "bdev_name": "Malloc0", 00:20:17.247 "name": "Malloc0", 00:20:17.247 "nguid": "EA64DE89D7C54CA4B1396FE6EF237997", 00:20:17.247 "nsid": 1, 00:20:17.247 "uuid": "ea64de89-d7c5-4ca4-b139-6fe6ef237997" 00:20:17.247 } 00:20:17.247 ], 00:20:17.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.247 "serial_number": "SPDK00000000000001", 00:20:17.247 "subtype": "NVMe" 00:20:17.247 } 00:20:17.247 ] 00:20:17.247 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.247 10:07:15 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:17.247 10:07:15 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:17.247 10:07:15 -- host/aer.sh@33 -- # aerpid=93021 00:20:17.247 10:07:15 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:17.247 10:07:15 -- common/autotest_common.sh@1254 -- # local i=0 00:20:17.247 10:07:15 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:17.247 10:07:15 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.247 10:07:15 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:17.247 10:07:15 -- common/autotest_common.sh@1257 -- # i=1 00:20:17.247 10:07:15 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:17.247 10:07:15 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.247 10:07:15 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:17.247 10:07:15 -- common/autotest_common.sh@1257 -- # i=2 00:20:17.247 10:07:15 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:17.507 10:07:15 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.507 10:07:15 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:17.507 10:07:15 -- common/autotest_common.sh@1265 -- # return 0 00:20:17.507 10:07:15 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:17.507 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 Malloc1 00:20:17.507 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:15 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:17.507 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:15 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:17.507 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 [ 00:20:17.507 { 00:20:17.507 "allow_any_host": true, 00:20:17.507 "hosts": [], 00:20:17.507 "listen_addresses": [], 00:20:17.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.507 "subtype": "Discovery" 00:20:17.507 }, 00:20:17.507 { 00:20:17.507 "allow_any_host": true, 00:20:17.507 "hosts": [], 00:20:17.507 "listen_addresses": [ 00:20:17.507 { 00:20:17.507 "adrfam": "IPv4", 00:20:17.507 "traddr": "10.0.0.2", 00:20:17.507 "transport": "TCP", 00:20:17.507 "trsvcid": "4420", 00:20:17.507 "trtype": "TCP" 00:20:17.507 } 00:20:17.507 ], 00:20:17.507 "max_cntlid": 65519, 00:20:17.507 "max_namespaces": 2, 00:20:17.507 "min_cntlid": 1, 00:20:17.507 Asynchronous Event Request test 00:20:17.507 Attaching to 10.0.0.2 00:20:17.507 Attached to 10.0.0.2 00:20:17.507 Registering asynchronous event callbacks... 00:20:17.507 Starting namespace attribute notice tests for all controllers... 00:20:17.507 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:17.507 aer_cb - Changed Namespace 00:20:17.507 Cleaning up... 
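The interleaved output above is the host/aer.sh scenario completing: the aer tool connects to cnode1, arms its asynchronous-event callback, and the attach of a second namespace is what produces the "Changed Namespace" notice (log page 4). Outside the harness, the same sequence can be reproduced roughly as follows, a minimal sketch in which scripts/rpc.py stands in for the rpc_cmd wrapper used by the test and all arguments are taken from the trace above:

# assumes a running nvmf_tgt reachable at the default RPC socket
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the aer tool registers its AEN callback, then touches the file so the script can continue
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -n 2 -t /tmp/aer_touch_file &
# adding namespace 2 is what fires the Changed Namespace AEN seen in the output above
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2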
00:20:17.507 "model_number": "SPDK bdev Controller", 00:20:17.507 "namespaces": [ 00:20:17.507 { 00:20:17.507 "bdev_name": "Malloc0", 00:20:17.507 "name": "Malloc0", 00:20:17.507 "nguid": "EA64DE89D7C54CA4B1396FE6EF237997", 00:20:17.507 "nsid": 1, 00:20:17.507 "uuid": "ea64de89-d7c5-4ca4-b139-6fe6ef237997" 00:20:17.507 }, 00:20:17.507 { 00:20:17.507 "bdev_name": "Malloc1", 00:20:17.507 "name": "Malloc1", 00:20:17.507 "nguid": "2FFE92CF64B648F5928DA5B9EC3D8D8F", 00:20:17.507 "nsid": 2, 00:20:17.507 "uuid": "2ffe92cf-64b6-48f5-928d-a5b9ec3d8d8f" 00:20:17.507 } 00:20:17.507 ], 00:20:17.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.507 "serial_number": "SPDK00000000000001", 00:20:17.507 "subtype": "NVMe" 00:20:17.507 } 00:20:17.507 ] 00:20:17.507 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:15 -- host/aer.sh@43 -- # wait 93021 00:20:17.507 10:07:15 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:17.507 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 10:07:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:15 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:17.507 10:07:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 10:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:16 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.507 10:07:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.507 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:20:17.507 10:07:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.507 10:07:16 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:17.507 10:07:16 -- host/aer.sh@51 -- # nvmftestfini 00:20:17.507 10:07:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.507 10:07:16 -- nvmf/common.sh@116 -- # sync 00:20:17.507 10:07:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:17.507 10:07:16 -- nvmf/common.sh@119 -- # set +e 00:20:17.507 10:07:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.507 10:07:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:17.507 rmmod nvme_tcp 00:20:17.507 rmmod nvme_fabrics 00:20:17.507 rmmod nvme_keyring 00:20:17.766 10:07:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.766 10:07:16 -- nvmf/common.sh@123 -- # set -e 00:20:17.766 10:07:16 -- nvmf/common.sh@124 -- # return 0 00:20:17.766 10:07:16 -- nvmf/common.sh@477 -- # '[' -n 92967 ']' 00:20:17.766 10:07:16 -- nvmf/common.sh@478 -- # killprocess 92967 00:20:17.766 10:07:16 -- common/autotest_common.sh@936 -- # '[' -z 92967 ']' 00:20:17.766 10:07:16 -- common/autotest_common.sh@940 -- # kill -0 92967 00:20:17.766 10:07:16 -- common/autotest_common.sh@941 -- # uname 00:20:17.766 10:07:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.766 10:07:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92967 00:20:17.766 10:07:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.766 10:07:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.766 killing process with pid 92967 00:20:17.766 10:07:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92967' 00:20:17.766 10:07:16 -- common/autotest_common.sh@955 -- # kill 92967 00:20:17.766 [2024-12-16 10:07:16.188142] app.c: 883:log_deprecation_hits: 
*WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:17.766 10:07:16 -- common/autotest_common.sh@960 -- # wait 92967 00:20:17.766 10:07:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:17.766 10:07:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:17.766 10:07:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:17.766 10:07:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.766 10:07:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:17.766 10:07:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.766 10:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.766 10:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.026 10:07:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:18.026 00:20:18.026 real 0m2.401s 00:20:18.026 user 0m6.632s 00:20:18.026 sys 0m0.709s 00:20:18.026 10:07:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:18.026 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.026 ************************************ 00:20:18.026 END TEST nvmf_aer 00:20:18.026 ************************************ 00:20:18.026 10:07:16 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.026 10:07:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.026 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.026 ************************************ 00:20:18.026 START TEST nvmf_async_init 00:20:18.026 ************************************ 00:20:18.026 10:07:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.026 * Looking for test storage... 00:20:18.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.026 10:07:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:18.026 10:07:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:18.026 10:07:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:18.026 10:07:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:18.026 10:07:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:18.026 10:07:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:18.026 10:07:16 -- scripts/common.sh@335 -- # IFS=.-: 00:20:18.026 10:07:16 -- scripts/common.sh@335 -- # read -ra ver1 00:20:18.026 10:07:16 -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.026 10:07:16 -- scripts/common.sh@336 -- # read -ra ver2 00:20:18.026 10:07:16 -- scripts/common.sh@337 -- # local 'op=<' 00:20:18.026 10:07:16 -- scripts/common.sh@339 -- # ver1_l=2 00:20:18.026 10:07:16 -- scripts/common.sh@340 -- # ver2_l=1 00:20:18.026 10:07:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:18.026 10:07:16 -- scripts/common.sh@343 -- # case "$op" in 00:20:18.026 10:07:16 -- scripts/common.sh@344 -- # : 1 00:20:18.026 10:07:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:18.026 10:07:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.026 10:07:16 -- scripts/common.sh@364 -- # decimal 1 00:20:18.026 10:07:16 -- scripts/common.sh@352 -- # local d=1 00:20:18.026 10:07:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.026 10:07:16 -- scripts/common.sh@354 -- # echo 1 00:20:18.026 10:07:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:18.026 10:07:16 -- scripts/common.sh@365 -- # decimal 2 00:20:18.026 10:07:16 -- scripts/common.sh@352 -- # local d=2 00:20:18.026 10:07:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.026 10:07:16 -- scripts/common.sh@354 -- # echo 2 00:20:18.026 10:07:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:18.026 10:07:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:18.026 10:07:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:18.026 10:07:16 -- scripts/common.sh@367 -- # return 0 00:20:18.026 10:07:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.026 --rc genhtml_branch_coverage=1 00:20:18.026 --rc genhtml_function_coverage=1 00:20:18.026 --rc genhtml_legend=1 00:20:18.026 --rc geninfo_all_blocks=1 00:20:18.026 --rc geninfo_unexecuted_blocks=1 00:20:18.026 00:20:18.026 ' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.026 --rc genhtml_branch_coverage=1 00:20:18.026 --rc genhtml_function_coverage=1 00:20:18.026 --rc genhtml_legend=1 00:20:18.026 --rc geninfo_all_blocks=1 00:20:18.026 --rc geninfo_unexecuted_blocks=1 00:20:18.026 00:20:18.026 ' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.026 --rc genhtml_branch_coverage=1 00:20:18.026 --rc genhtml_function_coverage=1 00:20:18.026 --rc genhtml_legend=1 00:20:18.026 --rc geninfo_all_blocks=1 00:20:18.026 --rc geninfo_unexecuted_blocks=1 00:20:18.026 00:20:18.026 ' 00:20:18.026 10:07:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:18.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.026 --rc genhtml_branch_coverage=1 00:20:18.026 --rc genhtml_function_coverage=1 00:20:18.026 --rc genhtml_legend=1 00:20:18.026 --rc geninfo_all_blocks=1 00:20:18.027 --rc geninfo_unexecuted_blocks=1 00:20:18.027 00:20:18.027 ' 00:20:18.027 10:07:16 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.027 10:07:16 -- nvmf/common.sh@7 -- # uname -s 00:20:18.027 10:07:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.027 10:07:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.027 10:07:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.027 10:07:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.027 10:07:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.027 10:07:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.027 10:07:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.027 10:07:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.027 10:07:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.027 10:07:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.027 10:07:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:18.027 
10:07:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:18.027 10:07:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.027 10:07:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.027 10:07:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.027 10:07:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.027 10:07:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.027 10:07:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.027 10:07:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.027 10:07:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.027 10:07:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.027 10:07:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.027 10:07:16 -- paths/export.sh@5 -- # export PATH 00:20:18.027 10:07:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.027 10:07:16 -- nvmf/common.sh@46 -- # : 0 00:20:18.027 10:07:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.027 10:07:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.027 10:07:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.027 10:07:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.027 10:07:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.027 10:07:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:18.027 10:07:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.027 10:07:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.027 10:07:16 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:18.027 10:07:16 -- host/async_init.sh@14 -- # null_block_size=512 00:20:18.027 10:07:16 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:18.027 10:07:16 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:18.027 10:07:16 -- host/async_init.sh@20 -- # uuidgen 00:20:18.027 10:07:16 -- host/async_init.sh@20 -- # tr -d - 00:20:18.027 10:07:16 -- host/async_init.sh@20 -- # nguid=8eeb3ee82db448f6ab809db92855d91f 00:20:18.027 10:07:16 -- host/async_init.sh@22 -- # nvmftestinit 00:20:18.027 10:07:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.027 10:07:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.027 10:07:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.027 10:07:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.027 10:07:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.027 10:07:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.027 10:07:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.027 10:07:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.286 10:07:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:18.286 10:07:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:18.286 10:07:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:18.286 10:07:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:18.286 10:07:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:18.286 10:07:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:18.286 10:07:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.286 10:07:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.286 10:07:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.286 10:07:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:18.286 10:07:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.286 10:07:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.286 10:07:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.286 10:07:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.286 10:07:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.286 10:07:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.286 10:07:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.286 10:07:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.286 10:07:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:18.286 10:07:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:18.286 Cannot find device "nvmf_tgt_br" 00:20:18.286 10:07:16 -- nvmf/common.sh@154 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.286 Cannot find device "nvmf_tgt_br2" 00:20:18.286 10:07:16 -- nvmf/common.sh@155 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:18.286 10:07:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:18.286 Cannot find device "nvmf_tgt_br" 00:20:18.286 10:07:16 -- nvmf/common.sh@157 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:18.286 Cannot find device "nvmf_tgt_br2" 00:20:18.286 10:07:16 
-- nvmf/common.sh@158 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:18.286 10:07:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:18.286 10:07:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.286 10:07:16 -- nvmf/common.sh@161 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.286 10:07:16 -- nvmf/common.sh@162 -- # true 00:20:18.286 10:07:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.286 10:07:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.286 10:07:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.286 10:07:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.286 10:07:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.286 10:07:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.286 10:07:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.286 10:07:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.286 10:07:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.286 10:07:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:18.286 10:07:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:18.286 10:07:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:18.286 10:07:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:18.287 10:07:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.287 10:07:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.287 10:07:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.287 10:07:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:18.287 10:07:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:18.287 10:07:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.546 10:07:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.546 10:07:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.546 10:07:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.546 10:07:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.546 10:07:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:18.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:20:18.546 00:20:18.546 --- 10.0.0.2 ping statistics --- 00:20:18.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.546 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:18.546 10:07:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:18.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:18.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:20:18.546 00:20:18.546 --- 10.0.0.3 ping statistics --- 00:20:18.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.546 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:18.546 10:07:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:18.546 00:20:18.546 --- 10.0.0.1 ping statistics --- 00:20:18.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.546 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:18.546 10:07:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.546 10:07:16 -- nvmf/common.sh@421 -- # return 0 00:20:18.546 10:07:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.546 10:07:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.546 10:07:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.546 10:07:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.546 10:07:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.546 10:07:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.546 10:07:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.546 10:07:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:18.546 10:07:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.546 10:07:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.546 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.546 10:07:16 -- nvmf/common.sh@469 -- # nvmfpid=93201 00:20:18.546 10:07:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:18.546 10:07:16 -- nvmf/common.sh@470 -- # waitforlisten 93201 00:20:18.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.546 10:07:16 -- common/autotest_common.sh@829 -- # '[' -z 93201 ']' 00:20:18.546 10:07:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.546 10:07:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.546 10:07:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.546 10:07:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.546 10:07:16 -- common/autotest_common.sh@10 -- # set +x 00:20:18.546 [2024-12-16 10:07:17.042713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:18.546 [2024-12-16 10:07:17.042790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.805 [2024-12-16 10:07:17.183963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.805 [2024-12-16 10:07:17.239534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:18.805 [2024-12-16 10:07:17.239677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.805 [2024-12-16 10:07:17.239688] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
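Before async_init starts its target, nvmf_veth_init builds the virtual topology that the pings above verify: an initiator veth on 10.0.0.1, target veths on 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all bridged together, with the nvmf_tgt application launched inside the namespace. A condensed sketch of the ip/iptables calls from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and is omitted here; the full nvmf/common.sh logic also tears down stale interfaces first):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target, as above
# the target itself then runs inside the namespace, exactly as in the trace:
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1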
00:20:18.805 [2024-12-16 10:07:17.239706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.805 [2024-12-16 10:07:17.239730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.373 10:07:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.373 10:07:17 -- common/autotest_common.sh@862 -- # return 0 00:20:19.373 10:07:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.373 10:07:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.373 10:07:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 10:07:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.632 10:07:18 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 [2024-12-16 10:07:18.044354] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 null0 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8eeb3ee82db448f6ab809db92855d91f 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.632 [2024-12-16 10:07:18.084502] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.632 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.632 10:07:18 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:19.632 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.632 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 nvme0n1 00:20:19.891 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.891 10:07:18 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:19.891 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.891 10:07:18 -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.891 [ 00:20:19.891 { 00:20:19.891 "aliases": [ 00:20:19.891 "8eeb3ee8-2db4-48f6-ab80-9db92855d91f" 00:20:19.891 ], 00:20:19.891 "assigned_rate_limits": { 00:20:19.891 "r_mbytes_per_sec": 0, 00:20:19.891 "rw_ios_per_sec": 0, 00:20:19.891 "rw_mbytes_per_sec": 0, 00:20:19.891 "w_mbytes_per_sec": 0 00:20:19.891 }, 00:20:19.891 "block_size": 512, 00:20:19.891 "claimed": false, 00:20:19.891 "driver_specific": { 00:20:19.891 "mp_policy": "active_passive", 00:20:19.891 "nvme": [ 00:20:19.891 { 00:20:19.891 "ctrlr_data": { 00:20:19.891 "ana_reporting": false, 00:20:19.891 "cntlid": 1, 00:20:19.891 "firmware_revision": "24.01.1", 00:20:19.891 "model_number": "SPDK bdev Controller", 00:20:19.891 "multi_ctrlr": true, 00:20:19.891 "oacs": { 00:20:19.891 "firmware": 0, 00:20:19.891 "format": 0, 00:20:19.891 "ns_manage": 0, 00:20:19.891 "security": 0 00:20:19.891 }, 00:20:19.891 "serial_number": "00000000000000000000", 00:20:19.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.891 "vendor_id": "0x8086" 00:20:19.891 }, 00:20:19.891 "ns_data": { 00:20:19.891 "can_share": true, 00:20:19.891 "id": 1 00:20:19.891 }, 00:20:19.891 "trid": { 00:20:19.891 "adrfam": "IPv4", 00:20:19.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.891 "traddr": "10.0.0.2", 00:20:19.891 "trsvcid": "4420", 00:20:19.891 "trtype": "TCP" 00:20:19.891 }, 00:20:19.891 "vs": { 00:20:19.891 "nvme_version": "1.3" 00:20:19.891 } 00:20:19.891 } 00:20:19.891 ] 00:20:19.891 }, 00:20:19.891 "name": "nvme0n1", 00:20:19.891 "num_blocks": 2097152, 00:20:19.891 "product_name": "NVMe disk", 00:20:19.891 "supported_io_types": { 00:20:19.891 "abort": true, 00:20:19.891 "compare": true, 00:20:19.891 "compare_and_write": true, 00:20:19.891 "flush": true, 00:20:19.891 "nvme_admin": true, 00:20:19.891 "nvme_io": true, 00:20:19.891 "read": true, 00:20:19.891 "reset": true, 00:20:19.891 "unmap": false, 00:20:19.891 "write": true, 00:20:19.891 "write_zeroes": true 00:20:19.891 }, 00:20:19.891 "uuid": "8eeb3ee8-2db4-48f6-ab80-9db92855d91f", 00:20:19.891 "zoned": false 00:20:19.891 } 00:20:19.891 ] 00:20:19.891 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.891 10:07:18 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:19.891 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.891 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 [2024-12-16 10:07:18.340460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:19.891 [2024-12-16 10:07:18.340544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff3a00 (9): Bad file descriptor 00:20:19.891 [2024-12-16 10:07:18.472467] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
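The host-side flow just traced is: create a null bdev and subsystem, attach a bdev_nvme controller over TCP, inspect the resulting nvme0n1, reset, and inspect again; the cntlid in ctrlr_data moves from 1 to 2, consistent with the reset tearing down and re-establishing the fabric controller. A condensed sketch of the same RPCs, again with scripts/rpc.py in place of the harness's rpc_cmd:

scripts/rpc.py bdev_null_create null0 1024 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8eeb3ee82db448f6ab809db92855d91f
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: attach, inspect, reset, inspect again
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 1 before the reset
scripts/rpc.py bdev_nvme_reset_controller nvme0
scripts/rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 2 after the reconnect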
00:20:19.891 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.891 10:07:18 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:19.891 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.891 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 [ 00:20:19.891 { 00:20:19.891 "aliases": [ 00:20:19.891 "8eeb3ee8-2db4-48f6-ab80-9db92855d91f" 00:20:19.891 ], 00:20:19.891 "assigned_rate_limits": { 00:20:19.891 "r_mbytes_per_sec": 0, 00:20:19.891 "rw_ios_per_sec": 0, 00:20:19.891 "rw_mbytes_per_sec": 0, 00:20:19.891 "w_mbytes_per_sec": 0 00:20:19.891 }, 00:20:19.891 "block_size": 512, 00:20:19.891 "claimed": false, 00:20:19.891 "driver_specific": { 00:20:19.891 "mp_policy": "active_passive", 00:20:19.891 "nvme": [ 00:20:19.891 { 00:20:19.891 "ctrlr_data": { 00:20:19.891 "ana_reporting": false, 00:20:19.891 "cntlid": 2, 00:20:19.891 "firmware_revision": "24.01.1", 00:20:19.891 "model_number": "SPDK bdev Controller", 00:20:19.891 "multi_ctrlr": true, 00:20:19.891 "oacs": { 00:20:19.891 "firmware": 0, 00:20:19.891 "format": 0, 00:20:19.891 "ns_manage": 0, 00:20:19.891 "security": 0 00:20:19.891 }, 00:20:19.891 "serial_number": "00000000000000000000", 00:20:19.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.891 "vendor_id": "0x8086" 00:20:19.891 }, 00:20:19.891 "ns_data": { 00:20:19.891 "can_share": true, 00:20:19.891 "id": 1 00:20:19.891 }, 00:20:19.891 "trid": { 00:20:19.891 "adrfam": "IPv4", 00:20:19.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.891 "traddr": "10.0.0.2", 00:20:19.891 "trsvcid": "4420", 00:20:19.891 "trtype": "TCP" 00:20:19.891 }, 00:20:19.891 "vs": { 00:20:19.891 "nvme_version": "1.3" 00:20:19.891 } 00:20:19.891 } 00:20:19.891 ] 00:20:19.891 }, 00:20:19.891 "name": "nvme0n1", 00:20:19.891 "num_blocks": 2097152, 00:20:19.891 "product_name": "NVMe disk", 00:20:19.891 "supported_io_types": { 00:20:19.891 "abort": true, 00:20:19.891 "compare": true, 00:20:19.891 "compare_and_write": true, 00:20:19.891 "flush": true, 00:20:19.891 "nvme_admin": true, 00:20:19.891 "nvme_io": true, 00:20:19.891 "read": true, 00:20:19.891 "reset": true, 00:20:19.891 "unmap": false, 00:20:19.891 "write": true, 00:20:19.891 "write_zeroes": true 00:20:19.891 }, 00:20:19.891 "uuid": "8eeb3ee8-2db4-48f6-ab80-9db92855d91f", 00:20:19.891 "zoned": false 00:20:19.891 } 00:20:19.891 ] 00:20:19.891 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.891 10:07:18 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.891 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.891 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:19.891 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@53 -- # mktemp 00:20:20.151 10:07:18 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rxIghv7uDO 00:20:20.151 10:07:18 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.151 10:07:18 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rxIghv7uDO 00:20:20.151 10:07:18 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 [2024-12-16 10:07:18.536566] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.151 [2024-12-16 10:07:18.536705] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rxIghv7uDO 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rxIghv7uDO 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 [2024-12-16 10:07:18.556576] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.151 nvme0n1 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 [ 00:20:20.151 { 00:20:20.151 "aliases": [ 00:20:20.151 "8eeb3ee8-2db4-48f6-ab80-9db92855d91f" 00:20:20.151 ], 00:20:20.151 "assigned_rate_limits": { 00:20:20.151 "r_mbytes_per_sec": 0, 00:20:20.151 "rw_ios_per_sec": 0, 00:20:20.151 "rw_mbytes_per_sec": 0, 00:20:20.151 "w_mbytes_per_sec": 0 00:20:20.151 }, 00:20:20.151 "block_size": 512, 00:20:20.151 "claimed": false, 00:20:20.151 "driver_specific": { 00:20:20.151 "mp_policy": "active_passive", 00:20:20.151 "nvme": [ 00:20:20.151 { 00:20:20.151 "ctrlr_data": { 00:20:20.151 "ana_reporting": false, 00:20:20.151 "cntlid": 3, 00:20:20.151 "firmware_revision": "24.01.1", 00:20:20.151 "model_number": "SPDK bdev Controller", 00:20:20.151 "multi_ctrlr": true, 00:20:20.151 "oacs": { 00:20:20.151 "firmware": 0, 00:20:20.151 "format": 0, 00:20:20.151 "ns_manage": 0, 00:20:20.151 "security": 0 00:20:20.151 }, 00:20:20.151 "serial_number": "00000000000000000000", 00:20:20.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.151 "vendor_id": "0x8086" 00:20:20.151 }, 00:20:20.151 "ns_data": { 00:20:20.151 "can_share": true, 00:20:20.151 "id": 1 00:20:20.151 }, 00:20:20.151 "trid": { 00:20:20.151 "adrfam": "IPv4", 00:20:20.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.151 "traddr": "10.0.0.2", 00:20:20.151 "trsvcid": "4421", 00:20:20.151 "trtype": "TCP" 00:20:20.151 }, 00:20:20.151 "vs": { 00:20:20.151 "nvme_version": "1.3" 00:20:20.151 } 00:20:20.151 } 00:20:20.151 ] 00:20:20.151 }, 00:20:20.151 "name": "nvme0n1", 00:20:20.151 "num_blocks": 2097152, 00:20:20.151 "product_name": "NVMe disk", 00:20:20.151 "supported_io_types": { 00:20:20.151 "abort": true, 00:20:20.151 "compare": true, 00:20:20.151 "compare_and_write": true, 00:20:20.151 "flush": true, 00:20:20.151 "nvme_admin": true, 00:20:20.151 "nvme_io": true, 00:20:20.151 
"read": true, 00:20:20.151 "reset": true, 00:20:20.151 "unmap": false, 00:20:20.151 "write": true, 00:20:20.151 "write_zeroes": true 00:20:20.151 }, 00:20:20.151 "uuid": "8eeb3ee8-2db4-48f6-ab80-9db92855d91f", 00:20:20.151 "zoned": false 00:20:20.151 } 00:20:20.151 ] 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.151 10:07:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.151 10:07:18 -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 10:07:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.151 10:07:18 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rxIghv7uDO 00:20:20.151 10:07:18 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:20.151 10:07:18 -- host/async_init.sh@78 -- # nvmftestfini 00:20:20.151 10:07:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:20.151 10:07:18 -- nvmf/common.sh@116 -- # sync 00:20:20.151 10:07:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:20.151 10:07:18 -- nvmf/common.sh@119 -- # set +e 00:20:20.151 10:07:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:20.151 10:07:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:20.151 rmmod nvme_tcp 00:20:20.151 rmmod nvme_fabrics 00:20:20.410 rmmod nvme_keyring 00:20:20.410 10:07:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.410 10:07:18 -- nvmf/common.sh@123 -- # set -e 00:20:20.410 10:07:18 -- nvmf/common.sh@124 -- # return 0 00:20:20.410 10:07:18 -- nvmf/common.sh@477 -- # '[' -n 93201 ']' 00:20:20.410 10:07:18 -- nvmf/common.sh@478 -- # killprocess 93201 00:20:20.410 10:07:18 -- common/autotest_common.sh@936 -- # '[' -z 93201 ']' 00:20:20.410 10:07:18 -- common/autotest_common.sh@940 -- # kill -0 93201 00:20:20.410 10:07:18 -- common/autotest_common.sh@941 -- # uname 00:20:20.410 10:07:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.410 10:07:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93201 00:20:20.410 10:07:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:20.410 10:07:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:20.410 killing process with pid 93201 00:20:20.410 10:07:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93201' 00:20:20.410 10:07:18 -- common/autotest_common.sh@955 -- # kill 93201 00:20:20.410 10:07:18 -- common/autotest_common.sh@960 -- # wait 93201 00:20:20.410 10:07:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:20.410 10:07:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:20.410 10:07:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:20.410 10:07:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.410 10:07:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:20.410 10:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.410 10:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.410 10:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.670 10:07:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:20.670 00:20:20.670 real 0m2.593s 00:20:20.670 user 0m2.401s 00:20:20.670 sys 0m0.628s 00:20:20.670 10:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.670 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:20.670 ************************************ 00:20:20.670 END TEST nvmf_async_init 00:20:20.670 
************************************ 00:20:20.670 10:07:19 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.670 10:07:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.670 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:20.670 ************************************ 00:20:20.670 START TEST dma 00:20:20.670 ************************************ 00:20:20.670 10:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.670 * Looking for test storage... 00:20:20.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.670 10:07:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:20.670 10:07:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:20.670 10:07:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:20.670 10:07:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:20.670 10:07:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:20.670 10:07:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:20.670 10:07:19 -- scripts/common.sh@335 -- # IFS=.-: 00:20:20.670 10:07:19 -- scripts/common.sh@335 -- # read -ra ver1 00:20:20.670 10:07:19 -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.670 10:07:19 -- scripts/common.sh@336 -- # read -ra ver2 00:20:20.670 10:07:19 -- scripts/common.sh@337 -- # local 'op=<' 00:20:20.670 10:07:19 -- scripts/common.sh@339 -- # ver1_l=2 00:20:20.670 10:07:19 -- scripts/common.sh@340 -- # ver2_l=1 00:20:20.670 10:07:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:20.670 10:07:19 -- scripts/common.sh@343 -- # case "$op" in 00:20:20.670 10:07:19 -- scripts/common.sh@344 -- # : 1 00:20:20.670 10:07:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:20.670 10:07:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.670 10:07:19 -- scripts/common.sh@364 -- # decimal 1 00:20:20.670 10:07:19 -- scripts/common.sh@352 -- # local d=1 00:20:20.670 10:07:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.670 10:07:19 -- scripts/common.sh@354 -- # echo 1 00:20:20.670 10:07:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:20.670 10:07:19 -- scripts/common.sh@365 -- # decimal 2 00:20:20.670 10:07:19 -- scripts/common.sh@352 -- # local d=2 00:20:20.670 10:07:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.670 10:07:19 -- scripts/common.sh@354 -- # echo 2 00:20:20.670 10:07:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:20.670 10:07:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:20.670 10:07:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:20.670 10:07:19 -- scripts/common.sh@367 -- # return 0 00:20:20.670 10:07:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:20.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.670 --rc genhtml_branch_coverage=1 00:20:20.670 --rc genhtml_function_coverage=1 00:20:20.670 --rc genhtml_legend=1 00:20:20.670 --rc geninfo_all_blocks=1 00:20:20.670 --rc geninfo_unexecuted_blocks=1 00:20:20.670 00:20:20.670 ' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:20.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.670 --rc genhtml_branch_coverage=1 00:20:20.670 --rc genhtml_function_coverage=1 00:20:20.670 --rc genhtml_legend=1 00:20:20.670 --rc geninfo_all_blocks=1 00:20:20.670 --rc geninfo_unexecuted_blocks=1 00:20:20.670 00:20:20.670 ' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:20.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.670 --rc genhtml_branch_coverage=1 00:20:20.670 --rc genhtml_function_coverage=1 00:20:20.670 --rc genhtml_legend=1 00:20:20.670 --rc geninfo_all_blocks=1 00:20:20.670 --rc geninfo_unexecuted_blocks=1 00:20:20.670 00:20:20.670 ' 00:20:20.670 10:07:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:20.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.670 --rc genhtml_branch_coverage=1 00:20:20.670 --rc genhtml_function_coverage=1 00:20:20.670 --rc genhtml_legend=1 00:20:20.670 --rc geninfo_all_blocks=1 00:20:20.670 --rc geninfo_unexecuted_blocks=1 00:20:20.670 00:20:20.670 ' 00:20:20.670 10:07:19 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.931 10:07:19 -- nvmf/common.sh@7 -- # uname -s 00:20:20.931 10:07:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.931 10:07:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.931 10:07:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.931 10:07:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.931 10:07:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.931 10:07:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.931 10:07:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.931 10:07:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.931 10:07:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.931 10:07:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.931 10:07:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:20.931 
10:07:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:20.931 10:07:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.931 10:07:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.931 10:07:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.931 10:07:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.931 10:07:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.931 10:07:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.931 10:07:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.932 10:07:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.932 10:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.932 10:07:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.932 10:07:19 -- paths/export.sh@5 -- # export PATH 00:20:20.932 10:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.932 10:07:19 -- nvmf/common.sh@46 -- # : 0 00:20:20.932 10:07:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:20.932 10:07:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:20.932 10:07:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:20.932 10:07:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.932 10:07:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.932 10:07:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:20.932 10:07:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:20.932 10:07:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:20.932 10:07:19 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:20.932 10:07:19 -- host/dma.sh@13 -- # exit 0 00:20:20.932 00:20:20.932 real 0m0.205s 00:20:20.932 user 0m0.127s 00:20:20.932 sys 0m0.083s 00:20:20.932 10:07:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.932 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:20.932 ************************************ 00:20:20.932 END TEST dma 00:20:20.932 ************************************ 00:20:20.932 10:07:19 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.932 10:07:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.932 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:20.932 ************************************ 00:20:20.932 START TEST nvmf_identify 00:20:20.932 ************************************ 00:20:20.932 10:07:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.932 * Looking for test storage... 00:20:20.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.932 10:07:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:20.932 10:07:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:20.932 10:07:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:20.932 10:07:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:20.932 10:07:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:20.932 10:07:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:20.932 10:07:19 -- scripts/common.sh@335 -- # IFS=.-: 00:20:20.932 10:07:19 -- scripts/common.sh@335 -- # read -ra ver1 00:20:20.932 10:07:19 -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.932 10:07:19 -- scripts/common.sh@336 -- # read -ra ver2 00:20:20.932 10:07:19 -- scripts/common.sh@337 -- # local 'op=<' 00:20:20.932 10:07:19 -- scripts/common.sh@339 -- # ver1_l=2 00:20:20.932 10:07:19 -- scripts/common.sh@340 -- # ver2_l=1 00:20:20.932 10:07:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:20.932 10:07:19 -- scripts/common.sh@343 -- # case "$op" in 00:20:20.932 10:07:19 -- scripts/common.sh@344 -- # : 1 00:20:20.932 10:07:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:20.932 10:07:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.932 10:07:19 -- scripts/common.sh@364 -- # decimal 1 00:20:20.932 10:07:19 -- scripts/common.sh@352 -- # local d=1 00:20:20.932 10:07:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.932 10:07:19 -- scripts/common.sh@354 -- # echo 1 00:20:20.932 10:07:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:20.932 10:07:19 -- scripts/common.sh@365 -- # decimal 2 00:20:20.932 10:07:19 -- scripts/common.sh@352 -- # local d=2 00:20:20.932 10:07:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.932 10:07:19 -- scripts/common.sh@354 -- # echo 2 00:20:20.932 10:07:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:20.932 10:07:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:20.932 10:07:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:20.932 10:07:19 -- scripts/common.sh@367 -- # return 0 00:20:20.932 10:07:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.932 --rc genhtml_branch_coverage=1 00:20:20.932 --rc genhtml_function_coverage=1 00:20:20.932 --rc genhtml_legend=1 00:20:20.932 --rc geninfo_all_blocks=1 00:20:20.932 --rc geninfo_unexecuted_blocks=1 00:20:20.932 00:20:20.932 ' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.932 --rc genhtml_branch_coverage=1 00:20:20.932 --rc genhtml_function_coverage=1 00:20:20.932 --rc genhtml_legend=1 00:20:20.932 --rc geninfo_all_blocks=1 00:20:20.932 --rc geninfo_unexecuted_blocks=1 00:20:20.932 00:20:20.932 ' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.932 --rc genhtml_branch_coverage=1 00:20:20.932 --rc genhtml_function_coverage=1 00:20:20.932 --rc genhtml_legend=1 00:20:20.932 --rc geninfo_all_blocks=1 00:20:20.932 --rc geninfo_unexecuted_blocks=1 00:20:20.932 00:20:20.932 ' 00:20:20.932 10:07:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:20.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.932 --rc genhtml_branch_coverage=1 00:20:20.932 --rc genhtml_function_coverage=1 00:20:20.932 --rc genhtml_legend=1 00:20:20.932 --rc geninfo_all_blocks=1 00:20:20.932 --rc geninfo_unexecuted_blocks=1 00:20:20.932 00:20:20.932 ' 00:20:20.932 10:07:19 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.932 10:07:19 -- nvmf/common.sh@7 -- # uname -s 00:20:20.932 10:07:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.932 10:07:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.932 10:07:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.932 10:07:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.932 10:07:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.932 10:07:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.932 10:07:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.932 10:07:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.932 10:07:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.932 10:07:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.932 10:07:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:20.932 
10:07:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:20.932 10:07:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.192 10:07:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.192 10:07:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.192 10:07:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.192 10:07:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.192 10:07:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.192 10:07:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.192 10:07:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.192 10:07:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.192 10:07:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.192 10:07:19 -- paths/export.sh@5 -- # export PATH 00:20:21.192 10:07:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.192 10:07:19 -- nvmf/common.sh@46 -- # : 0 00:20:21.192 10:07:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:21.192 10:07:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:21.192 10:07:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:21.192 10:07:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.192 10:07:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.192 10:07:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:21.192 10:07:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:21.192 10:07:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:21.192 10:07:19 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.192 10:07:19 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.192 10:07:19 -- host/identify.sh@14 -- # nvmftestinit 00:20:21.192 10:07:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:21.192 10:07:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.192 10:07:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:21.192 10:07:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:21.192 10:07:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:21.192 10:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.192 10:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.192 10:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.192 10:07:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:21.192 10:07:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:21.192 10:07:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:21.192 10:07:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:21.192 10:07:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:21.192 10:07:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:21.192 10:07:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.192 10:07:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.192 10:07:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.192 10:07:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:21.192 10:07:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.192 10:07:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.192 10:07:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.192 10:07:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.192 10:07:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.192 10:07:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.192 10:07:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.192 10:07:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.192 10:07:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:21.192 10:07:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:21.192 Cannot find device "nvmf_tgt_br" 00:20:21.192 10:07:19 -- nvmf/common.sh@154 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.192 Cannot find device "nvmf_tgt_br2" 00:20:21.192 10:07:19 -- nvmf/common.sh@155 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:21.192 10:07:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:21.192 Cannot find device "nvmf_tgt_br" 00:20:21.192 10:07:19 -- nvmf/common.sh@157 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:21.192 Cannot find device "nvmf_tgt_br2" 00:20:21.192 10:07:19 -- nvmf/common.sh@158 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:21.192 10:07:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:21.192 10:07:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.192 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:21.192 10:07:19 -- nvmf/common.sh@161 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.192 10:07:19 -- nvmf/common.sh@162 -- # true 00:20:21.192 10:07:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.192 10:07:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.192 10:07:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.192 10:07:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.192 10:07:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.192 10:07:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.192 10:07:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.192 10:07:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.193 10:07:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:21.193 10:07:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:21.193 10:07:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:21.193 10:07:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:21.193 10:07:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:21.193 10:07:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.193 10:07:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.193 10:07:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.193 10:07:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:21.193 10:07:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:21.193 10:07:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.458 10:07:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.458 10:07:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.458 10:07:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.458 10:07:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.458 10:07:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:21.458 00:20:21.458 --- 10.0.0.2 ping statistics --- 00:20:21.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.458 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:21.458 10:07:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:21.458 00:20:21.458 --- 10.0.0.3 ping statistics --- 00:20:21.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.458 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:21.458 10:07:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:21.458 00:20:21.458 --- 10.0.0.1 ping statistics --- 00:20:21.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.458 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:21.458 10:07:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.458 10:07:19 -- nvmf/common.sh@421 -- # return 0 00:20:21.458 10:07:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.458 10:07:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.458 10:07:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.458 10:07:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.458 10:07:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.458 10:07:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.458 10:07:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.458 10:07:19 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:21.458 10:07:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.458 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:21.458 10:07:19 -- host/identify.sh@19 -- # nvmfpid=93482 00:20:21.458 10:07:19 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.458 10:07:19 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.458 10:07:19 -- host/identify.sh@23 -- # waitforlisten 93482 00:20:21.458 10:07:19 -- common/autotest_common.sh@829 -- # '[' -z 93482 ']' 00:20:21.458 10:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.458 10:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.458 10:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.458 10:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.458 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:21.458 [2024-12-16 10:07:19.961573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.458 [2024-12-16 10:07:19.961659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.717 [2024-12-16 10:07:20.106384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.717 [2024-12-16 10:07:20.177257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:21.717 [2024-12-16 10:07:20.177438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.717 [2024-12-16 10:07:20.177456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.717 [2024-12-16 10:07:20.177468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
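Note: the nvmf_veth_init sequence traced above builds a two-namespace veth/bridge topology before the target application is launched. A minimal standalone sketch of that setup, using only the namespace, interface, and address names shown in the trace and assuming root privileges (the second target pair nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3 is configured the same way and is omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator namespace -> target address reachability check, as in the trace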
00:20:21.717 [2024-12-16 10:07:20.177566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.717 [2024-12-16 10:07:20.177967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.717 [2024-12-16 10:07:20.178964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.717 [2024-12-16 10:07:20.178998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.652 10:07:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.652 10:07:21 -- common/autotest_common.sh@862 -- # return 0 00:20:22.652 10:07:21 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.652 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.652 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.652 [2024-12-16 10:07:21.015945] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.652 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.652 10:07:21 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:22.653 10:07:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 10:07:21 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 Malloc0 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 [2024-12-16 10:07:21.119903] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:22.653 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.653 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:22.653 [2024-12-16 10:07:21.135662] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:22.653 [ 
00:20:22.653 { 00:20:22.653 "allow_any_host": true, 00:20:22.653 "hosts": [], 00:20:22.653 "listen_addresses": [ 00:20:22.653 { 00:20:22.653 "adrfam": "IPv4", 00:20:22.653 "traddr": "10.0.0.2", 00:20:22.653 "transport": "TCP", 00:20:22.653 "trsvcid": "4420", 00:20:22.653 "trtype": "TCP" 00:20:22.653 } 00:20:22.653 ], 00:20:22.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.653 "subtype": "Discovery" 00:20:22.653 }, 00:20:22.653 { 00:20:22.653 "allow_any_host": true, 00:20:22.653 "hosts": [], 00:20:22.653 "listen_addresses": [ 00:20:22.653 { 00:20:22.653 "adrfam": "IPv4", 00:20:22.653 "traddr": "10.0.0.2", 00:20:22.653 "transport": "TCP", 00:20:22.653 "trsvcid": "4420", 00:20:22.653 "trtype": "TCP" 00:20:22.653 } 00:20:22.653 ], 00:20:22.653 "max_cntlid": 65519, 00:20:22.653 "max_namespaces": 32, 00:20:22.653 "min_cntlid": 1, 00:20:22.653 "model_number": "SPDK bdev Controller", 00:20:22.653 "namespaces": [ 00:20:22.653 { 00:20:22.653 "bdev_name": "Malloc0", 00:20:22.653 "eui64": "ABCDEF0123456789", 00:20:22.653 "name": "Malloc0", 00:20:22.653 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:22.653 "nsid": 1, 00:20:22.653 "uuid": "393ec9b0-d759-4ca6-af3c-1bc8997ca994" 00:20:22.653 } 00:20:22.653 ], 00:20:22.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.653 "serial_number": "SPDK00000000000001", 00:20:22.653 "subtype": "NVMe" 00:20:22.653 } 00:20:22.653 ] 00:20:22.653 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.653 10:07:21 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:22.653 [2024-12-16 10:07:21.179638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
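Note: the rpc_cmd calls traced above configure the target end-to-end (transport, malloc bdev, subsystem, namespace, data and discovery listeners) before spdk_nvme_identify is run. Expressed as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock, the same sequence looks roughly like the sketch below; identify.sh issues these RPCs through its rpc_cmd helper rather than calling rpc.py directly.

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems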
00:20:22.653 [2024-12-16 10:07:21.179694] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93535 ] 00:20:22.915 [2024-12-16 10:07:21.318458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:22.915 [2024-12-16 10:07:21.318536] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:22.915 [2024-12-16 10:07:21.318542] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:22.915 [2024-12-16 10:07:21.318551] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:22.915 [2024-12-16 10:07:21.318560] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:22.915 [2024-12-16 10:07:21.318680] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:22.915 [2024-12-16 10:07:21.318762] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9eb510 0 00:20:22.915 [2024-12-16 10:07:21.323429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:22.915 [2024-12-16 10:07:21.323464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:22.915 [2024-12-16 10:07:21.323487] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:22.915 [2024-12-16 10:07:21.323491] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:22.915 [2024-12-16 10:07:21.323540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.915 [2024-12-16 10:07:21.323548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.915 [2024-12-16 10:07:21.323552] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.915 [2024-12-16 10:07:21.323585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:22.915 [2024-12-16 10:07:21.323625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.331421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.331522] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.331535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.331548] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.331580] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:22.916 [2024-12-16 10:07:21.331601] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:22.916 [2024-12-16 10:07:21.331615] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:22.916 [2024-12-16 10:07:21.331676] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.331696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.331706] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.331736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.331884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.331954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.331972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.331981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.331991] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.332005] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:22.916 [2024-12-16 10:07:21.332023] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:22.916 [2024-12-16 10:07:21.332041] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332051] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.332079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.332125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.332191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.332208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.332217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.332240] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:22.916 [2024-12-16 10:07:21.332261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.332278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.332314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.332379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.332443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.332459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:22.916 [2024-12-16 10:07:21.332468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.332491] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.332514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.332551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.332611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.332670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.332686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.332694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.332716] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:22.916 [2024-12-16 10:07:21.332728] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.332747] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.332867] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:22.916 [2024-12-16 10:07:21.332879] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.332905] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.332924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.332941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.332981] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.333057] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.333072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.333081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333090] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.333106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.916 [2024-12-16 10:07:21.333128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333139] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.333164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.333203] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.333256] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.333271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.333280] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.333301] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.916 [2024-12-16 10:07:21.333314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:22.916 [2024-12-16 10:07:21.333332] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:22.916 [2024-12-16 10:07:21.333401] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.916 [2024-12-16 10:07:21.333433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.916 [2024-12-16 10:07:21.333476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.916 [2024-12-16 10:07:21.333522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.916 [2024-12-16 10:07:21.333620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.916 [2024-12-16 10:07:21.333636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.916 [2024-12-16 10:07:21.333645] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333655] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eb510): datao=0, datal=4096, cccid=0 00:20:22.916 [2024-12-16 10:07:21.333666] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa378a0) on tqpair(0x9eb510): expected_datao=0, payload_size=4096 00:20:22.916 [2024-12-16 10:07:21.333693] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333703] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.916 [2024-12-16 10:07:21.333737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.916 [2024-12-16 10:07:21.333746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.916 [2024-12-16 10:07:21.333755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.916 [2024-12-16 10:07:21.333777] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:22.916 [2024-12-16 10:07:21.333790] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:22.916 [2024-12-16 10:07:21.333802] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:22.916 [2024-12-16 10:07:21.333820] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:22.916 [2024-12-16 10:07:21.333831] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:22.916 [2024-12-16 10:07:21.333843] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:22.916 [2024-12-16 10:07:21.333873] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.917 [2024-12-16 10:07:21.333892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.333903] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.333912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.333931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:22.917 [2024-12-16 10:07:21.333976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.917 [2024-12-16 10:07:21.334081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.334098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.334120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa378a0) on tqpair=0x9eb510 00:20:22.917 [2024-12-16 10:07:21.334138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.917 [2024-12-16 10:07:21.334166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.917 [2024-12-16 10:07:21.334193] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.917 [2024-12-16 10:07:21.334218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.917 [2024-12-16 10:07:21.334242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.917 [2024-12-16 10:07:21.334260] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.917 [2024-12-16 10:07:21.334270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.917 [2024-12-16 10:07:21.334319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa378a0, cid 0, qid 0 00:20:22.917 [2024-12-16 10:07:21.334329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37a00, cid 1, qid 0 00:20:22.917 [2024-12-16 10:07:21.334337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37b60, cid 2, qid 0 00:20:22.917 [2024-12-16 10:07:21.334343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.917 [2024-12-16 10:07:21.334349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37e20, cid 4, qid 0 00:20:22.917 [2024-12-16 10:07:21.334466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.334477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.334482] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37e20) on tqpair=0x9eb510 00:20:22.917 
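Note: the debug trace above shows the discovery controller being brought up over the fabric (FABRIC CONNECT, CC.EN set to 1, wait for CSTS.RDY = 1, IDENTIFY, then keep-alive and AER configuration). Outside this harness, the same listeners could be exercised with nvme-cli using the host identity defined in nvmf/common.sh earlier in the log; a sketch only, not part of this run:

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed \
      --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed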
[2024-12-16 10:07:21.334495] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:22.917 [2024-12-16 10:07:21.334502] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:22.917 [2024-12-16 10:07:21.334517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.917 [2024-12-16 10:07:21.334565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37e20, cid 4, qid 0 00:20:22.917 [2024-12-16 10:07:21.334639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.917 [2024-12-16 10:07:21.334648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.917 [2024-12-16 10:07:21.334653] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334658] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eb510): datao=0, datal=4096, cccid=4 00:20:22.917 [2024-12-16 10:07:21.334664] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa37e20) on tqpair(0x9eb510): expected_datao=0, payload_size=4096 00:20:22.917 [2024-12-16 10:07:21.334675] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334680] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.334699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.334704] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334709] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37e20) on tqpair=0x9eb510 00:20:22.917 [2024-12-16 10:07:21.334736] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:22.917 [2024-12-16 10:07:21.334774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.917 [2024-12-16 10:07:21.334808] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.334826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:22.917 [2024-12-16 10:07:21.334859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37e20, cid 4, qid 0 00:20:22.917 [2024-12-16 10:07:21.334869] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37f80, cid 5, qid 0 00:20:22.917 [2024-12-16 10:07:21.334965] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.917 [2024-12-16 10:07:21.334987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.917 [2024-12-16 10:07:21.334993] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.334998] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eb510): datao=0, datal=1024, cccid=4 00:20:22.917 [2024-12-16 10:07:21.335004] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa37e20) on tqpair(0x9eb510): expected_datao=0, payload_size=1024 00:20:22.917 [2024-12-16 10:07:21.335022] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.335027] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.335035] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.335042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.335047] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.335052] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37f80) on tqpair=0x9eb510 00:20:22.917 [2024-12-16 10:07:21.379390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.379413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.379420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37e20) on tqpair=0x9eb510 00:20:22.917 [2024-12-16 10:07:21.379445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.379468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.917 [2024-12-16 10:07:21.379522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37e20, cid 4, qid 0 00:20:22.917 [2024-12-16 10:07:21.379599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.917 [2024-12-16 10:07:21.379608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.917 [2024-12-16 10:07:21.379613] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379618] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eb510): datao=0, datal=3072, cccid=4 00:20:22.917 [2024-12-16 10:07:21.379624] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa37e20) on tqpair(0x9eb510): expected_datao=0, payload_size=3072 00:20:22.917 [2024-12-16 10:07:21.379636] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 
10:07:21.379641] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379663] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.917 [2024-12-16 10:07:21.379671] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.917 [2024-12-16 10:07:21.379683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379689] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37e20) on tqpair=0x9eb510 00:20:22.917 [2024-12-16 10:07:21.379702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.917 [2024-12-16 10:07:21.379721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9eb510) 00:20:22.917 [2024-12-16 10:07:21.379730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.917 [2024-12-16 10:07:21.379773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37e20, cid 4, qid 0 00:20:22.917 [2024-12-16 10:07:21.379844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.918 [2024-12-16 10:07:21.379853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.918 [2024-12-16 10:07:21.379858] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.918 [2024-12-16 10:07:21.379863] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9eb510): datao=0, datal=8, cccid=4 00:20:22.918 [2024-12-16 10:07:21.379870] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa37e20) on tqpair(0x9eb510): expected_datao=0, payload_size=8 00:20:22.918 [2024-12-16 10:07:21.379879] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.918 [2024-12-16 10:07:21.379884] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.918 [2024-12-16 10:07:21.420445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.918 [2024-12-16 10:07:21.420468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.918 [2024-12-16 10:07:21.420475] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.918 [2024-12-16 10:07:21.420485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37e20) on tqpair=0x9eb510 00:20:22.918 ===================================================== 00:20:22.918 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:22.918 ===================================================== 00:20:22.918 Controller Capabilities/Features 00:20:22.918 ================================ 00:20:22.918 Vendor ID: 0000 00:20:22.918 Subsystem Vendor ID: 0000 00:20:22.918 Serial Number: .................... 00:20:22.918 Model Number: ........................................ 
00:20:22.918 Firmware Version: 24.01.1 00:20:22.918 Recommended Arb Burst: 0 00:20:22.918 IEEE OUI Identifier: 00 00 00 00:20:22.918 Multi-path I/O 00:20:22.918 May have multiple subsystem ports: No 00:20:22.918 May have multiple controllers: No 00:20:22.918 Associated with SR-IOV VF: No 00:20:22.918 Max Data Transfer Size: 131072 00:20:22.918 Max Number of Namespaces: 0 00:20:22.918 Max Number of I/O Queues: 1024 00:20:22.918 NVMe Specification Version (VS): 1.3 00:20:22.918 NVMe Specification Version (Identify): 1.3 00:20:22.918 Maximum Queue Entries: 128 00:20:22.918 Contiguous Queues Required: Yes 00:20:22.918 Arbitration Mechanisms Supported 00:20:22.918 Weighted Round Robin: Not Supported 00:20:22.918 Vendor Specific: Not Supported 00:20:22.918 Reset Timeout: 15000 ms 00:20:22.918 Doorbell Stride: 4 bytes 00:20:22.918 NVM Subsystem Reset: Not Supported 00:20:22.918 Command Sets Supported 00:20:22.918 NVM Command Set: Supported 00:20:22.918 Boot Partition: Not Supported 00:20:22.918 Memory Page Size Minimum: 4096 bytes 00:20:22.918 Memory Page Size Maximum: 4096 bytes 00:20:22.918 Persistent Memory Region: Not Supported 00:20:22.918 Optional Asynchronous Events Supported 00:20:22.918 Namespace Attribute Notices: Not Supported 00:20:22.918 Firmware Activation Notices: Not Supported 00:20:22.918 ANA Change Notices: Not Supported 00:20:22.918 PLE Aggregate Log Change Notices: Not Supported 00:20:22.918 LBA Status Info Alert Notices: Not Supported 00:20:22.918 EGE Aggregate Log Change Notices: Not Supported 00:20:22.918 Normal NVM Subsystem Shutdown event: Not Supported 00:20:22.918 Zone Descriptor Change Notices: Not Supported 00:20:22.918 Discovery Log Change Notices: Supported 00:20:22.918 Controller Attributes 00:20:22.918 128-bit Host Identifier: Not Supported 00:20:22.918 Non-Operational Permissive Mode: Not Supported 00:20:22.918 NVM Sets: Not Supported 00:20:22.918 Read Recovery Levels: Not Supported 00:20:22.918 Endurance Groups: Not Supported 00:20:22.918 Predictable Latency Mode: Not Supported 00:20:22.918 Traffic Based Keep ALive: Not Supported 00:20:22.918 Namespace Granularity: Not Supported 00:20:22.918 SQ Associations: Not Supported 00:20:22.918 UUID List: Not Supported 00:20:22.918 Multi-Domain Subsystem: Not Supported 00:20:22.918 Fixed Capacity Management: Not Supported 00:20:22.918 Variable Capacity Management: Not Supported 00:20:22.918 Delete Endurance Group: Not Supported 00:20:22.918 Delete NVM Set: Not Supported 00:20:22.918 Extended LBA Formats Supported: Not Supported 00:20:22.918 Flexible Data Placement Supported: Not Supported 00:20:22.918 00:20:22.918 Controller Memory Buffer Support 00:20:22.918 ================================ 00:20:22.918 Supported: No 00:20:22.918 00:20:22.918 Persistent Memory Region Support 00:20:22.918 ================================ 00:20:22.918 Supported: No 00:20:22.918 00:20:22.918 Admin Command Set Attributes 00:20:22.918 ============================ 00:20:22.918 Security Send/Receive: Not Supported 00:20:22.918 Format NVM: Not Supported 00:20:22.918 Firmware Activate/Download: Not Supported 00:20:22.918 Namespace Management: Not Supported 00:20:22.918 Device Self-Test: Not Supported 00:20:22.918 Directives: Not Supported 00:20:22.918 NVMe-MI: Not Supported 00:20:22.918 Virtualization Management: Not Supported 00:20:22.918 Doorbell Buffer Config: Not Supported 00:20:22.918 Get LBA Status Capability: Not Supported 00:20:22.918 Command & Feature Lockdown Capability: Not Supported 00:20:22.918 Abort Command Limit: 1 00:20:22.918 
Async Event Request Limit: 4 00:20:22.918 Number of Firmware Slots: N/A 00:20:22.918 Firmware Slot 1 Read-Only: N/A 00:20:22.918 Firmware Activation Without Reset: N/A 00:20:22.918 Multiple Update Detection Support: N/A 00:20:22.918 Firmware Update Granularity: No Information Provided 00:20:22.918 Per-Namespace SMART Log: No 00:20:22.918 Asymmetric Namespace Access Log Page: Not Supported 00:20:22.918 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:22.918 Command Effects Log Page: Not Supported 00:20:22.918 Get Log Page Extended Data: Supported 00:20:22.918 Telemetry Log Pages: Not Supported 00:20:22.918 Persistent Event Log Pages: Not Supported 00:20:22.918 Supported Log Pages Log Page: May Support 00:20:22.918 Commands Supported & Effects Log Page: Not Supported 00:20:22.918 Feature Identifiers & Effects Log Page:May Support 00:20:22.918 NVMe-MI Commands & Effects Log Page: May Support 00:20:22.918 Data Area 4 for Telemetry Log: Not Supported 00:20:22.918 Error Log Page Entries Supported: 128 00:20:22.918 Keep Alive: Not Supported 00:20:22.918 00:20:22.918 NVM Command Set Attributes 00:20:22.918 ========================== 00:20:22.918 Submission Queue Entry Size 00:20:22.918 Max: 1 00:20:22.918 Min: 1 00:20:22.918 Completion Queue Entry Size 00:20:22.918 Max: 1 00:20:22.918 Min: 1 00:20:22.918 Number of Namespaces: 0 00:20:22.918 Compare Command: Not Supported 00:20:22.918 Write Uncorrectable Command: Not Supported 00:20:22.918 Dataset Management Command: Not Supported 00:20:22.918 Write Zeroes Command: Not Supported 00:20:22.918 Set Features Save Field: Not Supported 00:20:22.918 Reservations: Not Supported 00:20:22.918 Timestamp: Not Supported 00:20:22.918 Copy: Not Supported 00:20:22.918 Volatile Write Cache: Not Present 00:20:22.918 Atomic Write Unit (Normal): 1 00:20:22.918 Atomic Write Unit (PFail): 1 00:20:22.918 Atomic Compare & Write Unit: 1 00:20:22.918 Fused Compare & Write: Supported 00:20:22.918 Scatter-Gather List 00:20:22.918 SGL Command Set: Supported 00:20:22.918 SGL Keyed: Supported 00:20:22.918 SGL Bit Bucket Descriptor: Not Supported 00:20:22.918 SGL Metadata Pointer: Not Supported 00:20:22.918 Oversized SGL: Not Supported 00:20:22.918 SGL Metadata Address: Not Supported 00:20:22.918 SGL Offset: Supported 00:20:22.918 Transport SGL Data Block: Not Supported 00:20:22.918 Replay Protected Memory Block: Not Supported 00:20:22.918 00:20:22.918 Firmware Slot Information 00:20:22.918 ========================= 00:20:22.918 Active slot: 0 00:20:22.918 00:20:22.918 00:20:22.918 Error Log 00:20:22.918 ========= 00:20:22.918 00:20:22.918 Active Namespaces 00:20:22.918 ================= 00:20:22.918 Discovery Log Page 00:20:22.918 ================== 00:20:22.918 Generation Counter: 2 00:20:22.918 Number of Records: 2 00:20:22.918 Record Format: 0 00:20:22.918 00:20:22.918 Discovery Log Entry 0 00:20:22.918 ---------------------- 00:20:22.918 Transport Type: 3 (TCP) 00:20:22.918 Address Family: 1 (IPv4) 00:20:22.918 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:22.918 Entry Flags: 00:20:22.918 Duplicate Returned Information: 1 00:20:22.918 Explicit Persistent Connection Support for Discovery: 1 00:20:22.918 Transport Requirements: 00:20:22.918 Secure Channel: Not Required 00:20:22.918 Port ID: 0 (0x0000) 00:20:22.918 Controller ID: 65535 (0xffff) 00:20:22.918 Admin Max SQ Size: 128 00:20:22.918 Transport Service Identifier: 4420 00:20:22.918 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:22.918 Transport Address: 10.0.0.2 00:20:22.918 
Discovery Log Entry 1 00:20:22.918 ---------------------- 00:20:22.918 Transport Type: 3 (TCP) 00:20:22.918 Address Family: 1 (IPv4) 00:20:22.919 Subsystem Type: 2 (NVM Subsystem) 00:20:22.919 Entry Flags: 00:20:22.919 Duplicate Returned Information: 0 00:20:22.919 Explicit Persistent Connection Support for Discovery: 0 00:20:22.919 Transport Requirements: 00:20:22.919 Secure Channel: Not Required 00:20:22.919 Port ID: 0 (0x0000) 00:20:22.919 Controller ID: 65535 (0xffff) 00:20:22.919 Admin Max SQ Size: 128 00:20:22.919 Transport Service Identifier: 4420 00:20:22.919 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:22.919 Transport Address: 10.0.0.2 [2024-12-16 10:07:21.420636] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:22.919 [2024-12-16 10:07:21.420671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.919 [2024-12-16 10:07:21.420681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.919 [2024-12-16 10:07:21.420690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.919 [2024-12-16 10:07:21.420698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.919 [2024-12-16 10:07:21.420714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.420720] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.420725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.420736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.420774] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.420859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.420869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.420873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.420879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.420889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.420895] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.420900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.420910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.420947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421029] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421042] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:22.919 [2024-12-16 10:07:21.421048] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:22.919 [2024-12-16 10:07:21.421061] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421107] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421174] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421179] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421320] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421325] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421339] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 
10:07:21.421470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421476] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421500] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421616] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421641] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421680] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.421863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.421874] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.421879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 
[2024-12-16 10:07:21.421884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.421898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.421909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.421918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.421943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.422037] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.422066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.422072] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.422077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.919 [2024-12-16 10:07:21.422092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.422098] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.422112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.919 [2024-12-16 10:07:21.422122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.919 [2024-12-16 10:07:21.422148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.919 [2024-12-16 10:07:21.422203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.919 [2024-12-16 10:07:21.422211] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.919 [2024-12-16 10:07:21.422216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.919 [2024-12-16 10:07:21.422221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422235] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.422255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.422278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.422392] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.422403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.422408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422427] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422434] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.422448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.422494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.422550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.422559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.422564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422593] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.422603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.422627] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.422685] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.422694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.422698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422723] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.422745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.422769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.422823] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.422832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.422836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422861] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422866] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.422875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.422899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.422958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.422966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.422971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.422989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.422995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.423010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.423034] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.423095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.423104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.423109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423114] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.423127] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.423148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.423171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.423226] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.423241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.423247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.423266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.423278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.423287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.423311] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.427380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.427439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.427446] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.427452] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.427471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.427477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.427482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9eb510) 00:20:22.920 [2024-12-16 10:07:21.427494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.920 [2024-12-16 10:07:21.427530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa37cc0, cid 3, qid 0 00:20:22.920 [2024-12-16 10:07:21.427592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.920 [2024-12-16 10:07:21.427601] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.920 [2024-12-16 10:07:21.427606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.920 [2024-12-16 10:07:21.427621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa37cc0) on tqpair=0x9eb510 00:20:22.920 [2024-12-16 10:07:21.427632] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:22.920 00:20:22.920 10:07:21 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:22.920 [2024-12-16 10:07:21.470114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:22.920 [2024-12-16 10:07:21.470168] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93537 ] 00:20:23.185 [2024-12-16 10:07:21.611436] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.185 [2024-12-16 10:07:21.615620] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.185 [2024-12-16 10:07:21.615644] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.185 [2024-12-16 10:07:21.615657] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.185 [2024-12-16 10:07:21.615667] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.185 [2024-12-16 10:07:21.615839] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.185 [2024-12-16 10:07:21.615940] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb38510 0 00:20:23.185 [2024-12-16 10:07:21.631579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.185 [2024-12-16 10:07:21.631621] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.185 [2024-12-16 10:07:21.631626] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.185 [2024-12-16 10:07:21.631630] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.185 [2024-12-16 10:07:21.631703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.631710] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.631713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.185 [2024-12-16 10:07:21.631726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.185 [2024-12-16 10:07:21.631772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.185 [2024-12-16 10:07:21.639575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.185 [2024-12-16 10:07:21.639606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.185 [2024-12-16 10:07:21.639611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.185 [2024-12-16 10:07:21.639636] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.185 [2024-12-16 10:07:21.639643] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:23.185 [2024-12-16 10:07:21.639662] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:23.185 [2024-12-16 10:07:21.639677] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639686] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.185 [2024-12-16 10:07:21.639706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.185 [2024-12-16 10:07:21.639748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.185 [2024-12-16 10:07:21.639885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.185 [2024-12-16 10:07:21.639898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.185 [2024-12-16 10:07:21.639902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.185 [2024-12-16 10:07:21.639912] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:23.185 [2024-12-16 10:07:21.639920] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:23.185 [2024-12-16 10:07:21.639949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.639957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.185 [2024-12-16 10:07:21.639964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.185 [2024-12-16 10:07:21.639986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.185 [2024-12-16 10:07:21.640050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.185 [2024-12-16 10:07:21.640057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.185 [2024-12-16 10:07:21.640060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.640064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.185 [2024-12-16 10:07:21.640069] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:23.185 [2024-12-16 10:07:21.640092] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.185 [2024-12-16 10:07:21.640100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.640104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.185 [2024-12-16 10:07:21.640119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.185 [2024-12-16 10:07:21.640126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.186 [2024-12-16 10:07:21.640146] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.640215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.640222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.640226] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.640236] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.186 [2024-12-16 10:07:21.640246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.640262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.186 [2024-12-16 10:07:21.640282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.640358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.640366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.640369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640373] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.640378] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.186 [2024-12-16 10:07:21.640384] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.186 [2024-12-16 10:07:21.640392] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.186 [2024-12-16 10:07:21.640504] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:23.186 [2024-12-16 10:07:21.640511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.186 [2024-12-16 10:07:21.640520] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.640537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.186 [2024-12-16 10:07:21.640591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.640653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.640660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.640663] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.640672] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.186 [2024-12-16 10:07:21.640697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640705] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.640712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.186 [2024-12-16 10:07:21.640732] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.640802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.640808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.640812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.640821] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.186 [2024-12-16 10:07:21.640827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.640835] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:23.186 [2024-12-16 10:07:21.640853] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.640874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640879] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.640882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.640889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.186 [2024-12-16 10:07:21.640910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.641015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.186 [2024-12-16 10:07:21.641022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.186 [2024-12-16 10:07:21.641025] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641029] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=4096, cccid=0 00:20:23.186 [2024-12-16 10:07:21.641034] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb848a0) on tqpair(0xb38510): expected_datao=0, payload_size=4096 00:20:23.186 [2024-12-16 10:07:21.641043] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641047] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 
10:07:21.641055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.641061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.641065] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.641078] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:23.186 [2024-12-16 10:07:21.641085] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:23.186 [2024-12-16 10:07:21.641089] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:23.186 [2024-12-16 10:07:21.641094] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:23.186 [2024-12-16 10:07:21.641098] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:23.186 [2024-12-16 10:07:21.641108] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.641137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.641145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.641191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.186 [2024-12-16 10:07:21.641228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.186 [2024-12-16 10:07:21.641290] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.186 [2024-12-16 10:07:21.641304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.186 [2024-12-16 10:07:21.641308] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641312] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb848a0) on tqpair=0xb38510 00:20:23.186 [2024-12-16 10:07:21.641331] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641340] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.641358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.186 [2024-12-16 10:07:21.641364] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641372] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb38510) 
00:20:23.186 [2024-12-16 10:07:21.641392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.186 [2024-12-16 10:07:21.641399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.641413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.186 [2024-12-16 10:07:21.641419] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641423] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641427] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.186 [2024-12-16 10:07:21.641433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.186 [2024-12-16 10:07:21.641438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.641453] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.186 [2024-12-16 10:07:21.641461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.186 [2024-12-16 10:07:21.641465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.641475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.187 [2024-12-16 10:07:21.641499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb848a0, cid 0, qid 0 00:20:23.187 [2024-12-16 10:07:21.641521] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84a00, cid 1, qid 0 00:20:23.187 [2024-12-16 10:07:21.641526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84b60, cid 2, qid 0 00:20:23.187 [2024-12-16 10:07:21.641531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.187 [2024-12-16 10:07:21.641536] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.187 [2024-12-16 10:07:21.641644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.187 [2024-12-16 10:07:21.641650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.187 [2024-12-16 10:07:21.641654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.187 [2024-12-16 10:07:21.641663] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:23.187 [2024-12-16 10:07:21.641668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.641676] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.641703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.641710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641719] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.641727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.187 [2024-12-16 10:07:21.641747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.187 [2024-12-16 10:07:21.641813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.187 [2024-12-16 10:07:21.641831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.187 [2024-12-16 10:07:21.641835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.187 [2024-12-16 10:07:21.641909] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.641929] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.641949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641954] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.641957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.641965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.187 [2024-12-16 10:07:21.641986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.187 [2024-12-16 10:07:21.642089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.187 [2024-12-16 10:07:21.642097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.187 [2024-12-16 10:07:21.642101] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642105] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=4096, cccid=4 00:20:23.187 [2024-12-16 10:07:21.642110] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb84e20) on tqpair(0xb38510): expected_datao=0, payload_size=4096 00:20:23.187 [2024-12-16 10:07:21.642119] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642123] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:23.187 [2024-12-16 10:07:21.642138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.187 [2024-12-16 10:07:21.642142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.187 [2024-12-16 10:07:21.642164] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:23.187 [2024-12-16 10:07:21.642177] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642188] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642200] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.642212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.187 [2024-12-16 10:07:21.642235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.187 [2024-12-16 10:07:21.642354] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.187 [2024-12-16 10:07:21.642361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.187 [2024-12-16 10:07:21.642365] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642383] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=4096, cccid=4 00:20:23.187 [2024-12-16 10:07:21.642389] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb84e20) on tqpair(0xb38510): expected_datao=0, payload_size=4096 00:20:23.187 [2024-12-16 10:07:21.642398] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642402] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.187 [2024-12-16 10:07:21.642419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.187 [2024-12-16 10:07:21.642424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.187 [2024-12-16 10:07:21.642445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642458] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642487] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.642494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.187 [2024-12-16 10:07:21.642517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.187 [2024-12-16 10:07:21.642602] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.187 [2024-12-16 10:07:21.642608] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.187 [2024-12-16 10:07:21.642626] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642630] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=4096, cccid=4 00:20:23.187 [2024-12-16 10:07:21.642635] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb84e20) on tqpair(0xb38510): expected_datao=0, payload_size=4096 00:20:23.187 [2024-12-16 10:07:21.642657] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642661] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.187 [2024-12-16 10:07:21.642675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.187 [2024-12-16 10:07:21.642678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.187 [2024-12-16 10:07:21.642691] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642722] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642727] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642732] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:23.187 [2024-12-16 10:07:21.642737] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:23.187 [2024-12-16 10:07:21.642743] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:23.187 [2024-12-16 10:07:21.642758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.187 [2024-12-16 10:07:21.642773] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.187 [2024-12-16 10:07:21.642780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.187 [2024-12-16 10:07:21.642783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.642787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.642794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.188 [2024-12-16 10:07:21.642819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.188 [2024-12-16 10:07:21.642826] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84f80, cid 5, qid 0 00:20:23.188 [2024-12-16 10:07:21.642901] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.642907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.642910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.642914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.642936] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.642942] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.642945] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.642949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84f80) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.642959] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.642963] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.642968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.642975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.642993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84f80, cid 5, qid 0 00:20:23.188 [2024-12-16 10:07:21.643068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.643074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.643078] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84f80) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.643092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 
10:07:21.643125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84f80, cid 5, qid 0 00:20:23.188 [2024-12-16 10:07:21.643194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.643201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.643205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643209] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84f80) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.643219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.643253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84f80, cid 5, qid 0 00:20:23.188 [2024-12-16 10:07:21.643312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.643319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.643323] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84f80) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.643341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.643365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643372] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.643388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.643411] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643415] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.643419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb38510) 00:20:23.188 [2024-12-16 10:07:21.643425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.188 [2024-12-16 10:07:21.643447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84f80, cid 5, qid 0 00:20:23.188 [2024-12-16 10:07:21.647546] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84e20, cid 4, qid 0 00:20:23.188 [2024-12-16 10:07:21.647569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb850e0, cid 6, qid 0 00:20:23.188 [2024-12-16 10:07:21.647575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb85240, cid 7, qid 0 00:20:23.188 [2024-12-16 10:07:21.647605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.188 [2024-12-16 10:07:21.647612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.188 [2024-12-16 10:07:21.647616] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647619] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=8192, cccid=5 00:20:23.188 [2024-12-16 10:07:21.647625] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb84f80) on tqpair(0xb38510): expected_datao=0, payload_size=8192 00:20:23.188 [2024-12-16 10:07:21.647633] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647637] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.188 [2024-12-16 10:07:21.647648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.188 [2024-12-16 10:07:21.647651] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647654] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=512, cccid=4 00:20:23.188 [2024-12-16 10:07:21.647659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb84e20) on tqpair(0xb38510): expected_datao=0, payload_size=512 00:20:23.188 [2024-12-16 10:07:21.647665] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647669] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.188 [2024-12-16 10:07:21.647679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.188 [2024-12-16 10:07:21.647697] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647700] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=512, cccid=6 00:20:23.188 [2024-12-16 10:07:21.647704] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb850e0) on tqpair(0xb38510): expected_datao=0, payload_size=512 00:20:23.188 [2024-12-16 10:07:21.647710] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647714] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647719] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.188 [2024-12-16 10:07:21.647724] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.188 [2024-12-16 10:07:21.647727] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647730] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb38510): datao=0, datal=4096, cccid=7 00:20:23.188 [2024-12-16 10:07:21.647734] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb85240) on tqpair(0xb38510): expected_datao=0, payload_size=4096 00:20:23.188 [2024-12-16 10:07:21.647740] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647753] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.647763] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.647767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84f80) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.647788] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.647795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.647798] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647802] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84e20) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.647811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.647817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.647820] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647823] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb850e0) on tqpair=0xb38510 00:20:23.188 [2024-12-16 10:07:21.647830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.188 [2024-12-16 10:07:21.647835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.188 [2024-12-16 10:07:21.647838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.188 [2024-12-16 10:07:21.647842] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb85240) on tqpair=0xb38510 00:20:23.188 ===================================================== 00:20:23.188 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.188 ===================================================== 00:20:23.188 Controller Capabilities/Features 00:20:23.188 ================================ 00:20:23.188 Vendor ID: 8086 00:20:23.188 Subsystem Vendor ID: 8086 00:20:23.189 Serial Number: SPDK00000000000001 00:20:23.189 Model Number: SPDK bdev Controller 00:20:23.189 Firmware Version: 24.01.1 00:20:23.189 Recommended Arb Burst: 6 00:20:23.189 IEEE OUI Identifier: e4 d2 5c 00:20:23.189 Multi-path I/O 00:20:23.189 May have multiple subsystem ports: Yes 00:20:23.189 May have multiple controllers: Yes 00:20:23.189 Associated with SR-IOV VF: No 00:20:23.189 Max Data Transfer Size: 131072 00:20:23.189 Max Number of Namespaces: 32 00:20:23.189 Max Number of I/O 
Queues: 127 00:20:23.189 NVMe Specification Version (VS): 1.3 00:20:23.189 NVMe Specification Version (Identify): 1.3 00:20:23.189 Maximum Queue Entries: 128 00:20:23.189 Contiguous Queues Required: Yes 00:20:23.189 Arbitration Mechanisms Supported 00:20:23.189 Weighted Round Robin: Not Supported 00:20:23.189 Vendor Specific: Not Supported 00:20:23.189 Reset Timeout: 15000 ms 00:20:23.189 Doorbell Stride: 4 bytes 00:20:23.189 NVM Subsystem Reset: Not Supported 00:20:23.189 Command Sets Supported 00:20:23.189 NVM Command Set: Supported 00:20:23.189 Boot Partition: Not Supported 00:20:23.189 Memory Page Size Minimum: 4096 bytes 00:20:23.189 Memory Page Size Maximum: 4096 bytes 00:20:23.189 Persistent Memory Region: Not Supported 00:20:23.189 Optional Asynchronous Events Supported 00:20:23.189 Namespace Attribute Notices: Supported 00:20:23.189 Firmware Activation Notices: Not Supported 00:20:23.189 ANA Change Notices: Not Supported 00:20:23.189 PLE Aggregate Log Change Notices: Not Supported 00:20:23.189 LBA Status Info Alert Notices: Not Supported 00:20:23.189 EGE Aggregate Log Change Notices: Not Supported 00:20:23.189 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.189 Zone Descriptor Change Notices: Not Supported 00:20:23.189 Discovery Log Change Notices: Not Supported 00:20:23.189 Controller Attributes 00:20:23.189 128-bit Host Identifier: Supported 00:20:23.189 Non-Operational Permissive Mode: Not Supported 00:20:23.189 NVM Sets: Not Supported 00:20:23.189 Read Recovery Levels: Not Supported 00:20:23.189 Endurance Groups: Not Supported 00:20:23.189 Predictable Latency Mode: Not Supported 00:20:23.189 Traffic Based Keep ALive: Not Supported 00:20:23.189 Namespace Granularity: Not Supported 00:20:23.189 SQ Associations: Not Supported 00:20:23.189 UUID List: Not Supported 00:20:23.189 Multi-Domain Subsystem: Not Supported 00:20:23.189 Fixed Capacity Management: Not Supported 00:20:23.189 Variable Capacity Management: Not Supported 00:20:23.189 Delete Endurance Group: Not Supported 00:20:23.189 Delete NVM Set: Not Supported 00:20:23.189 Extended LBA Formats Supported: Not Supported 00:20:23.189 Flexible Data Placement Supported: Not Supported 00:20:23.189 00:20:23.189 Controller Memory Buffer Support 00:20:23.189 ================================ 00:20:23.189 Supported: No 00:20:23.189 00:20:23.189 Persistent Memory Region Support 00:20:23.189 ================================ 00:20:23.189 Supported: No 00:20:23.189 00:20:23.189 Admin Command Set Attributes 00:20:23.189 ============================ 00:20:23.189 Security Send/Receive: Not Supported 00:20:23.189 Format NVM: Not Supported 00:20:23.189 Firmware Activate/Download: Not Supported 00:20:23.189 Namespace Management: Not Supported 00:20:23.189 Device Self-Test: Not Supported 00:20:23.189 Directives: Not Supported 00:20:23.189 NVMe-MI: Not Supported 00:20:23.189 Virtualization Management: Not Supported 00:20:23.189 Doorbell Buffer Config: Not Supported 00:20:23.189 Get LBA Status Capability: Not Supported 00:20:23.189 Command & Feature Lockdown Capability: Not Supported 00:20:23.189 Abort Command Limit: 4 00:20:23.189 Async Event Request Limit: 4 00:20:23.189 Number of Firmware Slots: N/A 00:20:23.189 Firmware Slot 1 Read-Only: N/A 00:20:23.189 Firmware Activation Without Reset: N/A 00:20:23.189 Multiple Update Detection Support: N/A 00:20:23.189 Firmware Update Granularity: No Information Provided 00:20:23.189 Per-Namespace SMART Log: No 00:20:23.189 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.189 
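As a rough manual cross-check of the identify data printed above and below, the same controller can be queried by hand with the kernel nvme-tcp initiator and nvme-cli, assuming the target at 10.0.0.2:4420 is still up and the connected controller enumerates as /dev/nvme0 (the device name is an assumption of this sketch):

  # Connect to the SPDK TCP subsystem exercised by this test
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Identify Controller - mirrors the IDENTIFY (06h) admin commands traced above
  nvme id-ctrl /dev/nvme0
  # Identify Namespace 1 - the namespace the log reports as added
  nvme id-ns /dev/nvme0 -n 1
  # Get Features, Number of Queues (feature id 0x07), also traced above
  nvme get-feature /dev/nvme0 -f 0x07
  # Disconnect when finished
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
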
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:23.189 Command Effects Log Page: Supported 00:20:23.189 Get Log Page Extended Data: Supported 00:20:23.189 Telemetry Log Pages: Not Supported 00:20:23.189 Persistent Event Log Pages: Not Supported 00:20:23.189 Supported Log Pages Log Page: May Support 00:20:23.189 Commands Supported & Effects Log Page: Not Supported 00:20:23.189 Feature Identifiers & Effects Log Page:May Support 00:20:23.189 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.189 Data Area 4 for Telemetry Log: Not Supported 00:20:23.189 Error Log Page Entries Supported: 128 00:20:23.189 Keep Alive: Supported 00:20:23.189 Keep Alive Granularity: 10000 ms 00:20:23.189 00:20:23.189 NVM Command Set Attributes 00:20:23.189 ========================== 00:20:23.189 Submission Queue Entry Size 00:20:23.189 Max: 64 00:20:23.189 Min: 64 00:20:23.189 Completion Queue Entry Size 00:20:23.189 Max: 16 00:20:23.189 Min: 16 00:20:23.189 Number of Namespaces: 32 00:20:23.189 Compare Command: Supported 00:20:23.189 Write Uncorrectable Command: Not Supported 00:20:23.189 Dataset Management Command: Supported 00:20:23.189 Write Zeroes Command: Supported 00:20:23.189 Set Features Save Field: Not Supported 00:20:23.189 Reservations: Supported 00:20:23.189 Timestamp: Not Supported 00:20:23.189 Copy: Supported 00:20:23.189 Volatile Write Cache: Present 00:20:23.189 Atomic Write Unit (Normal): 1 00:20:23.189 Atomic Write Unit (PFail): 1 00:20:23.189 Atomic Compare & Write Unit: 1 00:20:23.189 Fused Compare & Write: Supported 00:20:23.189 Scatter-Gather List 00:20:23.189 SGL Command Set: Supported 00:20:23.189 SGL Keyed: Supported 00:20:23.189 SGL Bit Bucket Descriptor: Not Supported 00:20:23.189 SGL Metadata Pointer: Not Supported 00:20:23.189 Oversized SGL: Not Supported 00:20:23.189 SGL Metadata Address: Not Supported 00:20:23.189 SGL Offset: Supported 00:20:23.189 Transport SGL Data Block: Not Supported 00:20:23.189 Replay Protected Memory Block: Not Supported 00:20:23.189 00:20:23.189 Firmware Slot Information 00:20:23.189 ========================= 00:20:23.189 Active slot: 1 00:20:23.189 Slot 1 Firmware Revision: 24.01.1 00:20:23.189 00:20:23.189 00:20:23.189 Commands Supported and Effects 00:20:23.189 ============================== 00:20:23.189 Admin Commands 00:20:23.189 -------------- 00:20:23.189 Get Log Page (02h): Supported 00:20:23.189 Identify (06h): Supported 00:20:23.189 Abort (08h): Supported 00:20:23.189 Set Features (09h): Supported 00:20:23.189 Get Features (0Ah): Supported 00:20:23.189 Asynchronous Event Request (0Ch): Supported 00:20:23.189 Keep Alive (18h): Supported 00:20:23.189 I/O Commands 00:20:23.189 ------------ 00:20:23.189 Flush (00h): Supported LBA-Change 00:20:23.189 Write (01h): Supported LBA-Change 00:20:23.189 Read (02h): Supported 00:20:23.189 Compare (05h): Supported 00:20:23.189 Write Zeroes (08h): Supported LBA-Change 00:20:23.189 Dataset Management (09h): Supported LBA-Change 00:20:23.189 Copy (19h): Supported LBA-Change 00:20:23.189 Unknown (79h): Supported LBA-Change 00:20:23.189 Unknown (7Ah): Supported 00:20:23.189 00:20:23.189 Error Log 00:20:23.189 ========= 00:20:23.189 00:20:23.189 Arbitration 00:20:23.189 =========== 00:20:23.189 Arbitration Burst: 1 00:20:23.189 00:20:23.189 Power Management 00:20:23.189 ================ 00:20:23.189 Number of Power States: 1 00:20:23.189 Current Power State: Power State #0 00:20:23.189 Power State #0: 00:20:23.189 Max Power: 0.00 W 00:20:23.189 Non-Operational State: Operational 00:20:23.189 Entry Latency: Not 
Reported 00:20:23.189 Exit Latency: Not Reported 00:20:23.189 Relative Read Throughput: 0 00:20:23.189 Relative Read Latency: 0 00:20:23.189 Relative Write Throughput: 0 00:20:23.189 Relative Write Latency: 0 00:20:23.189 Idle Power: Not Reported 00:20:23.189 Active Power: Not Reported 00:20:23.189 Non-Operational Permissive Mode: Not Supported 00:20:23.189 00:20:23.189 Health Information 00:20:23.189 ================== 00:20:23.189 Critical Warnings: 00:20:23.189 Available Spare Space: OK 00:20:23.189 Temperature: OK 00:20:23.189 Device Reliability: OK 00:20:23.189 Read Only: No 00:20:23.189 Volatile Memory Backup: OK 00:20:23.189 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:23.189 Temperature Threshold: [2024-12-16 10:07:21.647986] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.189 [2024-12-16 10:07:21.647994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.189 [2024-12-16 10:07:21.647998] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb38510) 00:20:23.189 [2024-12-16 10:07:21.648021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.189 [2024-12-16 10:07:21.648062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb85240, cid 7, qid 0 00:20:23.189 [2024-12-16 10:07:21.648140] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.189 [2024-12-16 10:07:21.648146] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.648150] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648154] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb85240) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648220] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:23.190 [2024-12-16 10:07:21.648243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.190 [2024-12-16 10:07:21.648252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.190 [2024-12-16 10:07:21.648258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.190 [2024-12-16 10:07:21.648265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.190 [2024-12-16 10:07:21.648275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648279] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.648316] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.648388] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.648396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 
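The FABRIC PROPERTY GET traces that follow are the host driver repeatedly reading controller registers over the fabric while it waits for the shutdown it just initiated to finish. Purely as an illustration, and assuming an nvme-cli build with fabrics property-get support plus the same hypothetical /dev/nvme0 device as above, equivalent register reads can be issued manually:

  # Read CSTS (offset 0x1c); CSTS.SHST indicates shutdown progress/completion
  nvme get-property /dev/nvme0 --offset=0x1c --human-readable
  # Read CC (offset 0x14) to see the shutdown notification bits that were set
  nvme get-property /dev/nvme0 --offset=0x14 --human-readable
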
[2024-12-16 10:07:21.648401] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.648456] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.648534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.648541] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.648545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648569] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:23.190 [2024-12-16 10:07:21.648574] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:23.190 [2024-12-16 10:07:21.648600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.648636] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.648699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.648714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.648719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.648773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.648829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 
10:07:21.648836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.648840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.648891] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.648943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.648951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.648954] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648959] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.648970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.648978] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.648986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.649019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.649079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.649107] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.649111] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649115] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.649126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.649142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.649161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.649217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.649229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.649234] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 
[2024-12-16 10:07:21.649238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.649249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649258] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.649266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.649286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.649341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.649348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.649363] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649368] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.190 [2024-12-16 10:07:21.649380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.190 [2024-12-16 10:07:21.649388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.190 [2024-12-16 10:07:21.649396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.190 [2024-12-16 10:07:21.649418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.190 [2024-12-16 10:07:21.649483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.190 [2024-12-16 10:07:21.649491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.190 [2024-12-16 10:07:21.649494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.649509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.649525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.649570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.649643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.649654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.649658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.649673] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.649689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.649710] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.649808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.649822] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.649826] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649830] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.649840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649844] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.649855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.649873] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.649932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.649943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.649947] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.649960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.649968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.649975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650130] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650406] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650439] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650522] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650526] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650695] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650706] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650710] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650844] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.650872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.650890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.650961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.650982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.650985] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.650988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.650998] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.651002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.651005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.651012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.651028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 
00:20:23.191 [2024-12-16 10:07:21.651085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.651091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.651094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.651097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.191 [2024-12-16 10:07:21.651106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.651110] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.191 [2024-12-16 10:07:21.651114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.191 [2024-12-16 10:07:21.651120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.191 [2024-12-16 10:07:21.651137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.191 [2024-12-16 10:07:21.651211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.191 [2024-12-16 10:07:21.651223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.191 [2024-12-16 10:07:21.651227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.651231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.192 [2024-12-16 10:07:21.651243] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.651247] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.651251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.192 [2024-12-16 10:07:21.651259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.192 [2024-12-16 10:07:21.651278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.192 [2024-12-16 10:07:21.651335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.192 [2024-12-16 10:07:21.651342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.192 [2024-12-16 10:07:21.651347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.654396] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.192 [2024-12-16 10:07:21.654418] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.654424] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.654428] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb38510) 00:20:23.192 [2024-12-16 10:07:21.654436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.192 [2024-12-16 10:07:21.654463] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb84cc0, cid 3, qid 0 00:20:23.192 [2024-12-16 10:07:21.654527] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.192 [2024-12-16 10:07:21.654548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:23.192 [2024-12-16 10:07:21.654552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.192 [2024-12-16 10:07:21.654556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb84cc0) on tqpair=0xb38510 00:20:23.192 [2024-12-16 10:07:21.654576] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:20:23.192 0 Kelvin (-273 Celsius) 00:20:23.192 Available Spare: 0% 00:20:23.192 Available Spare Threshold: 0% 00:20:23.192 Life Percentage Used: 0% 00:20:23.192 Data Units Read: 0 00:20:23.192 Data Units Written: 0 00:20:23.192 Host Read Commands: 0 00:20:23.192 Host Write Commands: 0 00:20:23.192 Controller Busy Time: 0 minutes 00:20:23.192 Power Cycles: 0 00:20:23.192 Power On Hours: 0 hours 00:20:23.192 Unsafe Shutdowns: 0 00:20:23.192 Unrecoverable Media Errors: 0 00:20:23.192 Lifetime Error Log Entries: 0 00:20:23.192 Warning Temperature Time: 0 minutes 00:20:23.192 Critical Temperature Time: 0 minutes 00:20:23.192 00:20:23.192 Number of Queues 00:20:23.192 ================ 00:20:23.192 Number of I/O Submission Queues: 127 00:20:23.192 Number of I/O Completion Queues: 127 00:20:23.192 00:20:23.192 Active Namespaces 00:20:23.192 ================= 00:20:23.192 Namespace ID:1 00:20:23.192 Error Recovery Timeout: Unlimited 00:20:23.192 Command Set Identifier: NVM (00h) 00:20:23.192 Deallocate: Supported 00:20:23.192 Deallocated/Unwritten Error: Not Supported 00:20:23.192 Deallocated Read Value: Unknown 00:20:23.192 Deallocate in Write Zeroes: Not Supported 00:20:23.192 Deallocated Guard Field: 0xFFFF 00:20:23.192 Flush: Supported 00:20:23.192 Reservation: Supported 00:20:23.192 Namespace Sharing Capabilities: Multiple Controllers 00:20:23.192 Size (in LBAs): 131072 (0GiB) 00:20:23.192 Capacity (in LBAs): 131072 (0GiB) 00:20:23.192 Utilization (in LBAs): 131072 (0GiB) 00:20:23.192 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:23.192 EUI64: ABCDEF0123456789 00:20:23.192 UUID: 393ec9b0-d759-4ca6-af3c-1bc8997ca994 00:20:23.192 Thin Provisioning: Not Supported 00:20:23.192 Per-NS Atomic Units: Yes 00:20:23.192 Atomic Boundary Size (Normal): 0 00:20:23.192 Atomic Boundary Size (PFail): 0 00:20:23.192 Atomic Boundary Offset: 0 00:20:23.192 Maximum Single Source Range Length: 65535 00:20:23.192 Maximum Copy Length: 65535 00:20:23.192 Maximum Source Range Count: 1 00:20:23.192 NGUID/EUI64 Never Reused: No 00:20:23.192 Namespace Write Protected: No 00:20:23.192 Number of LBA Formats: 1 00:20:23.192 Current LBA Format: LBA Format #00 00:20:23.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:23.192 00:20:23.192 10:07:21 -- host/identify.sh@51 -- # sync 00:20:23.192 10:07:21 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.192 10:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.192 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:20:23.192 10:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.192 10:07:21 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:23.192 10:07:21 -- host/identify.sh@56 -- # nvmftestfini 00:20:23.192 10:07:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:23.192 10:07:21 -- nvmf/common.sh@116 -- # sync 00:20:23.192 10:07:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:23.192 10:07:21 -- nvmf/common.sh@119 -- # set +e 00:20:23.192 10:07:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:23.192 10:07:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
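With the controller shutdown reported complete, identify.sh tears the environment down: the subsystem is deleted over RPC, the nvmf target process is stopped, and the kernel initiator modules are unloaded (the rmmod messages that follow). A minimal manual equivalent, assuming the repo layout used by this job and the target PID reported in this run, might be:

  # Remove the subsystem the identify test was pointed at
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Stop the nvmf target process (pid 93482 in this run)
  kill 93482
  # Unload the kernel NVMe-oF TCP initiator modules, as nvmftestfini does
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
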
00:20:23.192 rmmod nvme_tcp 00:20:23.192 rmmod nvme_fabrics 00:20:23.478 rmmod nvme_keyring 00:20:23.478 10:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:23.478 10:07:21 -- nvmf/common.sh@123 -- # set -e 00:20:23.478 10:07:21 -- nvmf/common.sh@124 -- # return 0 00:20:23.478 10:07:21 -- nvmf/common.sh@477 -- # '[' -n 93482 ']' 00:20:23.478 10:07:21 -- nvmf/common.sh@478 -- # killprocess 93482 00:20:23.478 10:07:21 -- common/autotest_common.sh@936 -- # '[' -z 93482 ']' 00:20:23.478 10:07:21 -- common/autotest_common.sh@940 -- # kill -0 93482 00:20:23.478 10:07:21 -- common/autotest_common.sh@941 -- # uname 00:20:23.478 10:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.478 10:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93482 00:20:23.478 10:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:23.478 10:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:23.478 10:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93482' 00:20:23.478 killing process with pid 93482 00:20:23.478 10:07:21 -- common/autotest_common.sh@955 -- # kill 93482 00:20:23.478 [2024-12-16 10:07:21.871566] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:23.478 10:07:21 -- common/autotest_common.sh@960 -- # wait 93482 00:20:23.740 10:07:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.740 10:07:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.740 10:07:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:23.740 10:07:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.740 10:07:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.740 10:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.740 10:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.740 10:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.740 10:07:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:23.740 ************************************ 00:20:23.740 END TEST nvmf_identify 00:20:23.740 ************************************ 00:20:23.740 00:20:23.740 real 0m2.894s 00:20:23.740 user 0m8.330s 00:20:23.740 sys 0m0.733s 00:20:23.740 10:07:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.740 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:20:23.740 10:07:22 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:23.740 10:07:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:23.741 10:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.741 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:20:23.741 ************************************ 00:20:23.741 START TEST nvmf_perf 00:20:23.741 ************************************ 00:20:23.741 10:07:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:24.000 * Looking for test storage... 
00:20:24.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.000 10:07:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:24.000 10:07:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:24.000 10:07:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:24.000 10:07:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:24.000 10:07:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:24.000 10:07:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:24.000 10:07:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:24.000 10:07:22 -- scripts/common.sh@335 -- # IFS=.-: 00:20:24.000 10:07:22 -- scripts/common.sh@335 -- # read -ra ver1 00:20:24.000 10:07:22 -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.000 10:07:22 -- scripts/common.sh@336 -- # read -ra ver2 00:20:24.000 10:07:22 -- scripts/common.sh@337 -- # local 'op=<' 00:20:24.000 10:07:22 -- scripts/common.sh@339 -- # ver1_l=2 00:20:24.000 10:07:22 -- scripts/common.sh@340 -- # ver2_l=1 00:20:24.000 10:07:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:24.000 10:07:22 -- scripts/common.sh@343 -- # case "$op" in 00:20:24.000 10:07:22 -- scripts/common.sh@344 -- # : 1 00:20:24.000 10:07:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:24.000 10:07:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:24.000 10:07:22 -- scripts/common.sh@364 -- # decimal 1 00:20:24.000 10:07:22 -- scripts/common.sh@352 -- # local d=1 00:20:24.000 10:07:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.000 10:07:22 -- scripts/common.sh@354 -- # echo 1 00:20:24.000 10:07:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:24.000 10:07:22 -- scripts/common.sh@365 -- # decimal 2 00:20:24.000 10:07:22 -- scripts/common.sh@352 -- # local d=2 00:20:24.000 10:07:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.000 10:07:22 -- scripts/common.sh@354 -- # echo 2 00:20:24.000 10:07:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:24.000 10:07:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:24.000 10:07:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:24.000 10:07:22 -- scripts/common.sh@367 -- # return 0 00:20:24.000 10:07:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.000 10:07:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:24.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.000 --rc genhtml_branch_coverage=1 00:20:24.000 --rc genhtml_function_coverage=1 00:20:24.000 --rc genhtml_legend=1 00:20:24.000 --rc geninfo_all_blocks=1 00:20:24.000 --rc geninfo_unexecuted_blocks=1 00:20:24.000 00:20:24.000 ' 00:20:24.000 10:07:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:24.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.000 --rc genhtml_branch_coverage=1 00:20:24.000 --rc genhtml_function_coverage=1 00:20:24.000 --rc genhtml_legend=1 00:20:24.000 --rc geninfo_all_blocks=1 00:20:24.000 --rc geninfo_unexecuted_blocks=1 00:20:24.000 00:20:24.000 ' 00:20:24.000 10:07:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:24.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.000 --rc genhtml_branch_coverage=1 00:20:24.000 --rc genhtml_function_coverage=1 00:20:24.000 --rc genhtml_legend=1 00:20:24.000 --rc geninfo_all_blocks=1 00:20:24.000 --rc geninfo_unexecuted_blocks=1 00:20:24.000 00:20:24.000 ' 00:20:24.000 
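The nvmf_perf stage starting here ultimately drives I/O against a TCP target with SPDK's perf example application. As a standalone sketch of that kind of workload, assuming the binary was built at build/examples/perf in this checkout and a target is still listening at 10.0.0.2:4420, a comparable random-read run would be:

  # 4 KiB random reads, queue depth 32, for 10 seconds against the TCP target
  /home/vagrant/spdk_repo/spdk/build/examples/perf \
      -q 32 -o 4096 -w randread -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
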
10:07:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:24.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.000 --rc genhtml_branch_coverage=1 00:20:24.000 --rc genhtml_function_coverage=1 00:20:24.000 --rc genhtml_legend=1 00:20:24.000 --rc geninfo_all_blocks=1 00:20:24.000 --rc geninfo_unexecuted_blocks=1 00:20:24.000 00:20:24.000 ' 00:20:24.000 10:07:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.000 10:07:22 -- nvmf/common.sh@7 -- # uname -s 00:20:24.000 10:07:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.000 10:07:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.000 10:07:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.000 10:07:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.000 10:07:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.000 10:07:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.000 10:07:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.000 10:07:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.000 10:07:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.000 10:07:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.000 10:07:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:24.000 10:07:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:20:24.000 10:07:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.000 10:07:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.000 10:07:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.000 10:07:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.000 10:07:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.000 10:07:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.000 10:07:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.000 10:07:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.000 10:07:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.000 10:07:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.000 10:07:22 -- paths/export.sh@5 -- # export PATH 00:20:24.000 10:07:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.000 10:07:22 -- nvmf/common.sh@46 -- # : 0 00:20:24.000 10:07:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:24.000 10:07:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:24.000 10:07:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:24.000 10:07:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.000 10:07:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.000 10:07:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:24.000 10:07:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:24.001 10:07:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:24.001 10:07:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:24.001 10:07:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:24.001 10:07:22 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:24.001 10:07:22 -- host/perf.sh@17 -- # nvmftestinit 00:20:24.001 10:07:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:24.001 10:07:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.001 10:07:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:24.001 10:07:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:24.001 10:07:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:24.001 10:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.001 10:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.001 10:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.001 10:07:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:24.001 10:07:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:24.001 10:07:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:24.001 10:07:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:24.001 10:07:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:24.001 10:07:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:24.001 10:07:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.001 10:07:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.001 10:07:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.001 10:07:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:24.001 10:07:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.001 10:07:22 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.001 10:07:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.001 10:07:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.001 10:07:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.001 10:07:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.001 10:07:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.001 10:07:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.001 10:07:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:24.001 10:07:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:24.001 Cannot find device "nvmf_tgt_br" 00:20:24.001 10:07:22 -- nvmf/common.sh@154 -- # true 00:20:24.001 10:07:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.001 Cannot find device "nvmf_tgt_br2" 00:20:24.001 10:07:22 -- nvmf/common.sh@155 -- # true 00:20:24.001 10:07:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:24.001 10:07:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:24.001 Cannot find device "nvmf_tgt_br" 00:20:24.001 10:07:22 -- nvmf/common.sh@157 -- # true 00:20:24.001 10:07:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:24.001 Cannot find device "nvmf_tgt_br2" 00:20:24.001 10:07:22 -- nvmf/common.sh@158 -- # true 00:20:24.001 10:07:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:24.001 10:07:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:24.260 10:07:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.260 10:07:22 -- nvmf/common.sh@161 -- # true 00:20:24.260 10:07:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.260 10:07:22 -- nvmf/common.sh@162 -- # true 00:20:24.260 10:07:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.260 10:07:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.260 10:07:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.260 10:07:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.260 10:07:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.260 10:07:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.260 10:07:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.260 10:07:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.260 10:07:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.260 10:07:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:24.260 10:07:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:24.260 10:07:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:24.260 10:07:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:24.260 10:07:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.260 10:07:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
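For orientation, the nvmf_veth_init trace around this point builds a small veth topology: the target interfaces live inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), the initiator stays in the root namespace (nvmf_init_if at 10.0.0.1), and the peer ends are tied together by the nvmf_br bridge with an iptables rule admitting TCP port 4420. The following is a condensed sketch of that setup, not the verbatim helper — it keeps only one target interface and uses the device names and addresses seen in this run:

    # Minimal sketch of the veth topology the harness builds (condensed from
    # the ip/iptables commands traced in this run; names as used there).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check

The ping checks that follow in the trace confirm exactly this reachability (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) before nvmf_tgt is started.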
00:20:24.260 10:07:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.260 10:07:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:24.260 10:07:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:24.260 10:07:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.260 10:07:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.260 10:07:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.260 10:07:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.260 10:07:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.260 10:07:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:24.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:24.260 00:20:24.260 --- 10.0.0.2 ping statistics --- 00:20:24.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.260 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:24.260 10:07:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:24.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:24.260 00:20:24.260 --- 10.0.0.3 ping statistics --- 00:20:24.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.260 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:24.260 10:07:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:24.260 00:20:24.261 --- 10.0.0.1 ping statistics --- 00:20:24.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.261 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:24.261 10:07:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.261 10:07:22 -- nvmf/common.sh@421 -- # return 0 00:20:24.261 10:07:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.261 10:07:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.261 10:07:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:24.261 10:07:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:24.261 10:07:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.261 10:07:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:24.261 10:07:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.261 10:07:22 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:24.261 10:07:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.261 10:07:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.261 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.261 10:07:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.261 10:07:22 -- nvmf/common.sh@469 -- # nvmfpid=93720 00:20:24.261 10:07:22 -- nvmf/common.sh@470 -- # waitforlisten 93720 00:20:24.261 10:07:22 -- common/autotest_common.sh@829 -- # '[' -z 93720 ']' 00:20:24.261 10:07:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.261 10:07:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.261 10:07:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:24.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.261 10:07:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.261 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:20:24.520 [2024-12-16 10:07:22.931080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.520 [2024-12-16 10:07:22.931166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.520 [2024-12-16 10:07:23.072648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.779 [2024-12-16 10:07:23.183151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.779 [2024-12-16 10:07:23.183311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.779 [2024-12-16 10:07:23.183325] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.779 [2024-12-16 10:07:23.183335] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.779 [2024-12-16 10:07:23.183535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.779 [2024-12-16 10:07:23.183718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.779 [2024-12-16 10:07:23.184254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.779 [2024-12-16 10:07:23.184312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.348 10:07:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.348 10:07:23 -- common/autotest_common.sh@862 -- # return 0 00:20:25.348 10:07:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.348 10:07:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.348 10:07:23 -- common/autotest_common.sh@10 -- # set +x 00:20:25.606 10:07:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.606 10:07:23 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:25.606 10:07:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:25.865 10:07:24 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:25.865 10:07:24 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:26.124 10:07:24 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:26.124 10:07:24 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:26.383 10:07:24 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:26.383 10:07:24 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:26.383 10:07:24 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:26.383 10:07:24 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:26.383 10:07:24 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.646 [2024-12-16 10:07:25.188838] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.646 10:07:25 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.911 10:07:25 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:20:26.911 10:07:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.169 10:07:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:27.169 10:07:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:27.428 10:07:25 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.687 [2024-12-16 10:07:26.216638] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.687 10:07:26 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.946 10:07:26 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:27.946 10:07:26 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:27.946 10:07:26 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:27.946 10:07:26 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:29.324 Initializing NVMe Controllers 00:20:29.324 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:29.324 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:29.324 Initialization complete. Launching workers. 00:20:29.324 ======================================================== 00:20:29.324 Latency(us) 00:20:29.324 Device Information : IOPS MiB/s Average min max 00:20:29.324 PCIE (0000:00:06.0) NSID 1 from core 0: 19931.40 77.86 1605.27 427.15 8365.91 00:20:29.324 ======================================================== 00:20:29.324 Total : 19931.40 77.86 1605.27 427.15 8365.91 00:20:29.324 00:20:29.324 10:07:27 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.260 Initializing NVMe Controllers 00:20:30.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:30.260 Initialization complete. Launching workers. 
00:20:30.260 ======================================================== 00:20:30.260 Latency(us) 00:20:30.260 Device Information : IOPS MiB/s Average min max 00:20:30.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4129.86 16.13 242.80 97.01 4201.20 00:20:30.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8162.99 5065.63 12027.89 00:20:30.260 ======================================================== 00:20:30.260 Total : 4252.86 16.61 471.85 97.01 12027.89 00:20:30.260 00:20:30.260 10:07:28 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.637 [2024-12-16 10:07:30.114456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 [2024-12-16 10:07:30.114666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270470 is same with the state(5) to be set 00:20:31.637 Initializing NVMe Controllers 00:20:31.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.637 Initialization complete. Launching workers. 
00:20:31.637 ======================================================== 00:20:31.637 Latency(us) 00:20:31.637 Device Information : IOPS MiB/s Average min max 00:20:31.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10157.89 39.68 3152.80 609.05 7362.87 00:20:31.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2699.97 10.55 11962.29 5738.44 20345.25 00:20:31.637 ======================================================== 00:20:31.637 Total : 12857.86 50.23 5002.67 609.05 20345.25 00:20:31.637 00:20:31.637 10:07:30 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:31.637 10:07:30 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.172 Initializing NVMe Controllers 00:20:34.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.172 Controller IO queue size 128, less than required. 00:20:34.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.172 Controller IO queue size 128, less than required. 00:20:34.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.172 Initialization complete. Launching workers. 00:20:34.172 ======================================================== 00:20:34.172 Latency(us) 00:20:34.172 Device Information : IOPS MiB/s Average min max 00:20:34.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1587.44 396.86 82270.45 54141.97 144791.63 00:20:34.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.48 146.37 225383.49 96045.32 330883.90 00:20:34.172 ======================================================== 00:20:34.172 Total : 2172.92 543.23 120831.28 54141.97 330883.90 00:20:34.172 00:20:34.172 10:07:32 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:34.431 No valid NVMe controllers or AIO or URING devices found 00:20:34.431 Initializing NVMe Controllers 00:20:34.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.431 Controller IO queue size 128, less than required. 00:20:34.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:34.431 Controller IO queue size 128, less than required. 00:20:34.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:34.431 WARNING: Some requested NVMe devices were skipped 00:20:34.431 10:07:33 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:36.968 Initializing NVMe Controllers 00:20:36.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.968 Controller IO queue size 128, less than required. 00:20:36.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.968 Controller IO queue size 128, less than required. 00:20:36.968 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.968 Initialization complete. Launching workers. 00:20:36.968 00:20:36.968 ==================== 00:20:36.968 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:36.968 TCP transport: 00:20:36.968 polls: 13229 00:20:36.968 idle_polls: 9544 00:20:36.968 sock_completions: 3685 00:20:36.968 nvme_completions: 2411 00:20:36.968 submitted_requests: 3784 00:20:36.968 queued_requests: 1 00:20:36.968 00:20:36.968 ==================== 00:20:36.968 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:36.968 TCP transport: 00:20:36.968 polls: 8849 00:20:36.968 idle_polls: 5483 00:20:36.968 sock_completions: 3366 00:20:36.968 nvme_completions: 6541 00:20:36.968 submitted_requests: 9887 00:20:36.968 queued_requests: 1 00:20:36.968 ======================================================== 00:20:36.968 Latency(us) 00:20:36.968 Device Information : IOPS MiB/s Average min max 00:20:36.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 666.00 166.50 199508.47 119445.88 300465.72 00:20:36.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1697.22 424.31 75943.82 42922.64 124967.36 00:20:36.968 ======================================================== 00:20:36.968 Total : 2363.22 590.81 110766.58 42922.64 300465.72 00:20:36.968 00:20:36.968 10:07:35 -- host/perf.sh@66 -- # sync 00:20:37.227 10:07:35 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.486 10:07:35 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:37.486 10:07:35 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:37.486 10:07:35 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:37.745 10:07:36 -- host/perf.sh@72 -- # ls_guid=871de2bc-01f1-4b40-94e8-b46b67395fbe 00:20:37.745 10:07:36 -- host/perf.sh@73 -- # get_lvs_free_mb 871de2bc-01f1-4b40-94e8-b46b67395fbe 00:20:37.745 10:07:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=871de2bc-01f1-4b40-94e8-b46b67395fbe 00:20:37.745 10:07:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:37.745 10:07:36 -- common/autotest_common.sh@1355 -- # local fc 00:20:37.745 10:07:36 -- common/autotest_common.sh@1356 -- # local cs 00:20:37.745 10:07:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:38.004 10:07:36 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:38.004 
{ 00:20:38.004 "base_bdev": "Nvme0n1", 00:20:38.004 "block_size": 4096, 00:20:38.004 "cluster_size": 4194304, 00:20:38.004 "free_clusters": 1278, 00:20:38.004 "name": "lvs_0", 00:20:38.004 "total_data_clusters": 1278, 00:20:38.004 "uuid": "871de2bc-01f1-4b40-94e8-b46b67395fbe" 00:20:38.004 } 00:20:38.004 ]' 00:20:38.004 10:07:36 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="871de2bc-01f1-4b40-94e8-b46b67395fbe") .free_clusters' 00:20:38.004 10:07:36 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:38.004 10:07:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="871de2bc-01f1-4b40-94e8-b46b67395fbe") .cluster_size' 00:20:38.004 5112 00:20:38.004 10:07:36 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:38.004 10:07:36 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:38.004 10:07:36 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:38.004 10:07:36 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:38.004 10:07:36 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 871de2bc-01f1-4b40-94e8-b46b67395fbe lbd_0 5112 00:20:38.572 10:07:36 -- host/perf.sh@80 -- # lb_guid=fd226a48-5789-4c0b-bc92-a3287cafaf71 00:20:38.572 10:07:36 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore fd226a48-5789-4c0b-bc92-a3287cafaf71 lvs_n_0 00:20:38.832 10:07:37 -- host/perf.sh@83 -- # ls_nested_guid=7f901a45-8695-4ca3-8909-c597bac37ce4 00:20:38.832 10:07:37 -- host/perf.sh@84 -- # get_lvs_free_mb 7f901a45-8695-4ca3-8909-c597bac37ce4 00:20:38.832 10:07:37 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7f901a45-8695-4ca3-8909-c597bac37ce4 00:20:38.832 10:07:37 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:38.832 10:07:37 -- common/autotest_common.sh@1355 -- # local fc 00:20:38.832 10:07:37 -- common/autotest_common.sh@1356 -- # local cs 00:20:38.832 10:07:37 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:39.092 10:07:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:39.092 { 00:20:39.092 "base_bdev": "Nvme0n1", 00:20:39.092 "block_size": 4096, 00:20:39.092 "cluster_size": 4194304, 00:20:39.092 "free_clusters": 0, 00:20:39.092 "name": "lvs_0", 00:20:39.092 "total_data_clusters": 1278, 00:20:39.092 "uuid": "871de2bc-01f1-4b40-94e8-b46b67395fbe" 00:20:39.092 }, 00:20:39.092 { 00:20:39.092 "base_bdev": "fd226a48-5789-4c0b-bc92-a3287cafaf71", 00:20:39.092 "block_size": 4096, 00:20:39.092 "cluster_size": 4194304, 00:20:39.092 "free_clusters": 1276, 00:20:39.092 "name": "lvs_n_0", 00:20:39.092 "total_data_clusters": 1276, 00:20:39.092 "uuid": "7f901a45-8695-4ca3-8909-c597bac37ce4" 00:20:39.092 } 00:20:39.092 ]' 00:20:39.092 10:07:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7f901a45-8695-4ca3-8909-c597bac37ce4") .free_clusters' 00:20:39.092 10:07:37 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:39.092 10:07:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7f901a45-8695-4ca3-8909-c597bac37ce4") .cluster_size' 00:20:39.092 10:07:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:39.092 10:07:37 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:39.092 5104 00:20:39.092 10:07:37 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:39.092 10:07:37 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:39.092 10:07:37 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
7f901a45-8695-4ca3-8909-c597bac37ce4 lbd_nest_0 5104 00:20:39.351 10:07:37 -- host/perf.sh@88 -- # lb_nested_guid=55115e04-07c9-41c5-aba1-7bc7777f1289 00:20:39.351 10:07:37 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.610 10:07:38 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:39.610 10:07:38 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 55115e04-07c9-41c5-aba1-7bc7777f1289 00:20:39.868 10:07:38 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.127 10:07:38 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:40.127 10:07:38 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:40.127 10:07:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:40.127 10:07:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.127 10:07:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.385 No valid NVMe controllers or AIO or URING devices found 00:20:40.385 Initializing NVMe Controllers 00:20:40.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.385 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:40.385 WARNING: Some requested NVMe devices were skipped 00:20:40.385 10:07:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.385 10:07:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.591 Initializing NVMe Controllers 00:20:52.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.591 Initialization complete. Launching workers. 
00:20:52.591 ======================================================== 00:20:52.591 Latency(us) 00:20:52.591 Device Information : IOPS MiB/s Average min max 00:20:52.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 830.61 103.83 1203.51 357.44 8717.94 00:20:52.591 ======================================================== 00:20:52.591 Total : 830.61 103.83 1203.51 357.44 8717.94 00:20:52.591 00:20:52.591 10:07:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:52.591 10:07:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.591 10:07:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.591 No valid NVMe controllers or AIO or URING devices found 00:20:52.591 Initializing NVMe Controllers 00:20:52.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.591 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:52.591 WARNING: Some requested NVMe devices were skipped 00:20:52.591 10:07:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.591 10:07:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.616 [2024-12-16 10:07:59.741460] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1402cf0 is same with the state(5) to be set 00:21:02.616 [2024-12-16 10:07:59.741536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1402cf0 is same with the state(5) to be set 00:21:02.616 [2024-12-16 10:07:59.741548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1402cf0 is same with the state(5) to be set 00:21:02.616 [2024-12-16 10:07:59.741556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1402cf0 is same with the state(5) to be set 00:21:02.616 Initializing NVMe Controllers 00:21:02.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.616 Initialization complete. Launching workers. 
00:21:02.616 ======================================================== 00:21:02.616 Latency(us) 00:21:02.616 Device Information : IOPS MiB/s Average min max 00:21:02.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1087.70 135.96 29460.40 6809.16 275078.97 00:21:02.616 ======================================================== 00:21:02.616 Total : 1087.70 135.96 29460.40 6809.16 275078.97 00:21:02.616 00:21:02.616 10:07:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:02.616 10:07:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.616 10:07:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.616 No valid NVMe controllers or AIO or URING devices found 00:21:02.616 Initializing NVMe Controllers 00:21:02.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.616 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:02.616 WARNING: Some requested NVMe devices were skipped 00:21:02.616 10:08:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.616 10:08:00 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.593 Initializing NVMe Controllers 00:21:12.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.593 Controller IO queue size 128, less than required. 00:21:12.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.593 Initialization complete. Launching workers. 
00:21:12.593 ======================================================== 00:21:12.593 Latency(us) 00:21:12.593 Device Information : IOPS MiB/s Average min max 00:21:12.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3987.10 498.39 32164.21 8788.89 70197.07 00:21:12.593 ======================================================== 00:21:12.593 Total : 3987.10 498.39 32164.21 8788.89 70197.07 00:21:12.593 00:21:12.593 10:08:10 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.593 10:08:10 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55115e04-07c9-41c5-aba1-7bc7777f1289 00:21:12.593 10:08:11 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:12.852 10:08:11 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fd226a48-5789-4c0b-bc92-a3287cafaf71 00:21:13.110 10:08:11 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:13.369 10:08:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:13.369 10:08:11 -- host/perf.sh@114 -- # nvmftestfini 00:21:13.369 10:08:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.369 10:08:11 -- nvmf/common.sh@116 -- # sync 00:21:13.369 10:08:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.369 10:08:11 -- nvmf/common.sh@119 -- # set +e 00:21:13.369 10:08:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.369 10:08:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.369 rmmod nvme_tcp 00:21:13.369 rmmod nvme_fabrics 00:21:13.369 rmmod nvme_keyring 00:21:13.369 10:08:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.369 10:08:11 -- nvmf/common.sh@123 -- # set -e 00:21:13.369 10:08:11 -- nvmf/common.sh@124 -- # return 0 00:21:13.369 10:08:11 -- nvmf/common.sh@477 -- # '[' -n 93720 ']' 00:21:13.369 10:08:11 -- nvmf/common.sh@478 -- # killprocess 93720 00:21:13.369 10:08:11 -- common/autotest_common.sh@936 -- # '[' -z 93720 ']' 00:21:13.369 10:08:11 -- common/autotest_common.sh@940 -- # kill -0 93720 00:21:13.369 10:08:11 -- common/autotest_common.sh@941 -- # uname 00:21:13.369 10:08:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.369 10:08:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93720 00:21:13.369 killing process with pid 93720 00:21:13.369 10:08:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.369 10:08:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.369 10:08:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93720' 00:21:13.369 10:08:11 -- common/autotest_common.sh@955 -- # kill 93720 00:21:13.369 10:08:11 -- common/autotest_common.sh@960 -- # wait 93720 00:21:14.745 10:08:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:14.745 10:08:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:14.745 10:08:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:14.745 10:08:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.745 10:08:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:14.745 10:08:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.745 10:08:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.745 10:08:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.745 10:08:13 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:14.745 00:21:14.745 real 0m51.009s 00:21:14.745 user 3m13.292s 00:21:14.745 sys 0m10.285s 00:21:14.745 ************************************ 00:21:14.745 END TEST nvmf_perf 00:21:14.745 ************************************ 00:21:14.745 10:08:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:14.745 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:14.745 10:08:13 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:14.745 10:08:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:14.745 10:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.745 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:15.004 ************************************ 00:21:15.004 START TEST nvmf_fio_host 00:21:15.004 ************************************ 00:21:15.004 10:08:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:15.004 * Looking for test storage... 00:21:15.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:15.004 10:08:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:15.004 10:08:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:15.004 10:08:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:15.004 10:08:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:15.004 10:08:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:15.004 10:08:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:15.004 10:08:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:15.004 10:08:13 -- scripts/common.sh@335 -- # IFS=.-: 00:21:15.004 10:08:13 -- scripts/common.sh@335 -- # read -ra ver1 00:21:15.004 10:08:13 -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.004 10:08:13 -- scripts/common.sh@336 -- # read -ra ver2 00:21:15.004 10:08:13 -- scripts/common.sh@337 -- # local 'op=<' 00:21:15.004 10:08:13 -- scripts/common.sh@339 -- # ver1_l=2 00:21:15.004 10:08:13 -- scripts/common.sh@340 -- # ver2_l=1 00:21:15.004 10:08:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:15.004 10:08:13 -- scripts/common.sh@343 -- # case "$op" in 00:21:15.004 10:08:13 -- scripts/common.sh@344 -- # : 1 00:21:15.004 10:08:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:15.004 10:08:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.004 10:08:13 -- scripts/common.sh@364 -- # decimal 1 00:21:15.004 10:08:13 -- scripts/common.sh@352 -- # local d=1 00:21:15.004 10:08:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.004 10:08:13 -- scripts/common.sh@354 -- # echo 1 00:21:15.004 10:08:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:15.004 10:08:13 -- scripts/common.sh@365 -- # decimal 2 00:21:15.004 10:08:13 -- scripts/common.sh@352 -- # local d=2 00:21:15.004 10:08:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.004 10:08:13 -- scripts/common.sh@354 -- # echo 2 00:21:15.004 10:08:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:15.004 10:08:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:15.004 10:08:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:15.004 10:08:13 -- scripts/common.sh@367 -- # return 0 00:21:15.004 10:08:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.004 10:08:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.004 --rc genhtml_branch_coverage=1 00:21:15.004 --rc genhtml_function_coverage=1 00:21:15.004 --rc genhtml_legend=1 00:21:15.004 --rc geninfo_all_blocks=1 00:21:15.004 --rc geninfo_unexecuted_blocks=1 00:21:15.004 00:21:15.004 ' 00:21:15.004 10:08:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.004 --rc genhtml_branch_coverage=1 00:21:15.004 --rc genhtml_function_coverage=1 00:21:15.004 --rc genhtml_legend=1 00:21:15.004 --rc geninfo_all_blocks=1 00:21:15.004 --rc geninfo_unexecuted_blocks=1 00:21:15.004 00:21:15.004 ' 00:21:15.004 10:08:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.004 --rc genhtml_branch_coverage=1 00:21:15.004 --rc genhtml_function_coverage=1 00:21:15.004 --rc genhtml_legend=1 00:21:15.004 --rc geninfo_all_blocks=1 00:21:15.004 --rc geninfo_unexecuted_blocks=1 00:21:15.004 00:21:15.004 ' 00:21:15.004 10:08:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:15.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.004 --rc genhtml_branch_coverage=1 00:21:15.004 --rc genhtml_function_coverage=1 00:21:15.004 --rc genhtml_legend=1 00:21:15.004 --rc geninfo_all_blocks=1 00:21:15.005 --rc geninfo_unexecuted_blocks=1 00:21:15.005 00:21:15.005 ' 00:21:15.005 10:08:13 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.005 10:08:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.005 10:08:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.005 10:08:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.005 10:08:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@5 -- # export PATH 00:21:15.005 10:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.005 10:08:13 -- nvmf/common.sh@7 -- # uname -s 00:21:15.005 10:08:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.005 10:08:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.005 10:08:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.005 10:08:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.005 10:08:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.005 10:08:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.005 10:08:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.005 10:08:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.005 10:08:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.005 10:08:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:21:15.005 10:08:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:21:15.005 10:08:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.005 10:08:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.005 10:08:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.005 10:08:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.005 10:08:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.005 10:08:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.005 10:08:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.005 10:08:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- paths/export.sh@5 -- # export PATH 00:21:15.005 10:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.005 10:08:13 -- nvmf/common.sh@46 -- # : 0 00:21:15.005 10:08:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:15.005 10:08:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:15.005 10:08:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:15.005 10:08:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.005 10:08:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.005 10:08:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:15.005 10:08:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:15.005 10:08:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:15.005 10:08:13 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:15.005 10:08:13 -- host/fio.sh@14 -- # nvmftestinit 00:21:15.005 10:08:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:15.005 10:08:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.005 10:08:13 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:15.005 10:08:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:15.005 10:08:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:15.005 10:08:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.005 10:08:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.005 10:08:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.005 10:08:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:15.005 10:08:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:15.005 10:08:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.005 10:08:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.005 10:08:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:15.005 10:08:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:15.005 10:08:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:15.005 10:08:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:15.005 10:08:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:15.005 10:08:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.005 10:08:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:15.005 10:08:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:15.005 10:08:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:15.005 10:08:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:15.005 10:08:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:15.005 10:08:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:15.005 Cannot find device "nvmf_tgt_br" 00:21:15.005 10:08:13 -- nvmf/common.sh@154 -- # true 00:21:15.005 10:08:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.005 Cannot find device "nvmf_tgt_br2" 00:21:15.005 10:08:13 -- nvmf/common.sh@155 -- # true 00:21:15.005 10:08:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:15.264 10:08:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:15.264 Cannot find device "nvmf_tgt_br" 00:21:15.264 10:08:13 -- nvmf/common.sh@157 -- # true 00:21:15.264 10:08:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:15.264 Cannot find device "nvmf_tgt_br2" 00:21:15.264 10:08:13 -- nvmf/common.sh@158 -- # true 00:21:15.264 10:08:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:15.264 10:08:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:15.264 10:08:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.264 10:08:13 -- nvmf/common.sh@161 -- # true 00:21:15.264 10:08:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.264 10:08:13 -- nvmf/common.sh@162 -- # true 00:21:15.264 10:08:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:15.264 10:08:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:15.264 10:08:13 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:15.264 10:08:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:15.264 10:08:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:15.264 10:08:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:15.264 10:08:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:15.264 10:08:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:15.264 10:08:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:15.264 10:08:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:15.264 10:08:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:15.264 10:08:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:15.264 10:08:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:15.264 10:08:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:15.264 10:08:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:15.264 10:08:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:15.264 10:08:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:15.264 10:08:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:15.264 10:08:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:15.264 10:08:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:15.264 10:08:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:15.264 10:08:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:15.264 10:08:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:15.264 10:08:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:15.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:21:15.264 00:21:15.264 --- 10.0.0.2 ping statistics --- 00:21:15.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.264 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:15.264 10:08:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:15.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:15.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:21:15.264 00:21:15.264 --- 10.0.0.3 ping statistics --- 00:21:15.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.264 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:15.264 10:08:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:15.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:15.264 00:21:15.264 --- 10.0.0.1 ping statistics --- 00:21:15.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.264 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:15.264 10:08:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.264 10:08:13 -- nvmf/common.sh@421 -- # return 0 00:21:15.264 10:08:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:15.264 10:08:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.264 10:08:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:15.264 10:08:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:15.264 10:08:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.264 10:08:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:15.264 10:08:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:15.523 10:08:13 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:15.523 10:08:13 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:15.523 10:08:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:15.523 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:15.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.523 10:08:13 -- host/fio.sh@24 -- # nvmfpid=94698 00:21:15.523 10:08:13 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.523 10:08:13 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.523 10:08:13 -- host/fio.sh@28 -- # waitforlisten 94698 00:21:15.523 10:08:13 -- common/autotest_common.sh@829 -- # '[' -z 94698 ']' 00:21:15.523 10:08:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.523 10:08:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.523 10:08:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.523 10:08:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.523 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:21:15.523 [2024-12-16 10:08:13.965313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:15.523 [2024-12-16 10:08:13.965598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.523 [2024-12-16 10:08:14.110045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.782 [2024-12-16 10:08:14.179978] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:15.782 [2024-12-16 10:08:14.180454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.782 [2024-12-16 10:08:14.180611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.782 [2024-12-16 10:08:14.180783] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
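For reference, the nvmf_veth_init xtrace above boils down to the sequence below. The commands are the ones echoed in the trace (paths and flags unchanged, link-up steps omitted for brevity); only the compression and the trailing comments are added here. The earlier "Cannot find device" / "Cannot open network namespace" messages come from the best-effort cleanup that runs before setup and are expected on a fresh runner, since each failing command is followed by a tolerated "true".

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br                  # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br                   # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2                  # second target veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge                                            # bridge the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge carries traffic in both directions before the target is started, and modprobe nvme-tcp loads the kernel NVMe/TCP initiator module for the host-side tests that need it.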
00:21:15.782 [2024-12-16 10:08:14.181067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.782 [2024-12-16 10:08:14.181211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.782 [2024-12-16 10:08:14.181287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.782 [2024-12-16 10:08:14.181289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.349 10:08:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.349 10:08:14 -- common/autotest_common.sh@862 -- # return 0 00:21:16.349 10:08:14 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:16.607 [2024-12-16 10:08:15.048579] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.607 10:08:15 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:16.607 10:08:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.607 10:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:16.607 10:08:15 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:16.865 Malloc1 00:21:16.866 10:08:15 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.124 10:08:15 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.382 10:08:15 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.641 [2024-12-16 10:08:16.145259] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.641 10:08:16 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:17.900 10:08:16 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:17.900 10:08:16 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.900 10:08:16 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:17.900 10:08:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:17.900 10:08:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.900 10:08:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:17.900 10:08:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.900 10:08:16 -- common/autotest_common.sh@1330 -- # shift 00:21:17.900 10:08:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:17.900 10:08:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:17.900 10:08:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:17.900 10:08:16 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:17.900 10:08:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:17.900 10:08:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:17.900 10:08:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:17.900 10:08:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:18.159 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:18.159 fio-3.35 00:21:18.159 Starting 1 thread 00:21:20.690 00:21:20.690 test: (groupid=0, jobs=1): err= 0: pid=94818: Mon Dec 16 10:08:18 2024 00:21:20.690 read: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(81.2MiB/2006msec) 00:21:20.690 slat (nsec): min=1824, max=370920, avg=2328.49, stdev=3280.61 00:21:20.690 clat (usec): min=3256, max=11388, avg=6550.24, stdev=565.75 00:21:20.690 lat (usec): min=3295, max=11390, avg=6552.57, stdev=565.72 00:21:20.690 clat percentiles (usec): 00:21:20.690 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6128], 00:21:20.690 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:20.690 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7504], 00:21:20.690 | 99.00th=[ 8029], 99.50th=[ 8291], 99.90th=[ 9372], 99.95th=[10683], 00:21:20.690 | 99.99th=[11338] 00:21:20.690 bw ( KiB/s): min=40264, max=42536, per=99.96%, avg=41424.00, stdev=1025.98, samples=4 00:21:20.690 iops : min=10066, max=10634, avg=10356.00, stdev=256.49, samples=4 00:21:20.690 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2006msec); 0 zone resets 00:21:20.690 slat (nsec): min=1869, max=274484, avg=2396.04, stdev=2318.19 00:21:20.690 clat (usec): min=2486, max=10822, avg=5761.06, stdev=470.89 00:21:20.690 lat (usec): min=2499, max=10824, avg=5763.46, stdev=470.91 00:21:20.690 clat percentiles (usec): 00:21:20.690 | 1.00th=[ 4752], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:21:20.690 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:21:20.690 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:21:20.690 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 9241], 00:21:20.690 | 99.99th=[10421] 00:21:20.690 bw ( KiB/s): min=40704, max=42304, per=100.00%, avg=41490.00, stdev=834.62, samples=4 00:21:20.690 iops : min=10176, max=10576, avg=10372.50, stdev=208.66, samples=4 00:21:20.690 lat (msec) : 4=0.06%, 10=99.89%, 20=0.06% 00:21:20.690 cpu : usr=67.43%, sys=23.64%, ctx=11, majf=0, minf=5 00:21:20.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:20.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:20.690 issued rwts: total=20782,20802,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:20.690 00:21:20.690 Run status group 0 (all jobs): 00:21:20.690 READ: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=81.2MiB (85.1MB), 
run=2006-2006msec 00:21:20.690 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.2MB), run=2006-2006msec 00:21:20.690 10:08:18 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.690 10:08:18 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.690 10:08:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:20.690 10:08:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.690 10:08:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:20.690 10:08:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.690 10:08:18 -- common/autotest_common.sh@1330 -- # shift 00:21:20.690 10:08:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:20.690 10:08:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.690 10:08:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.690 10:08:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:20.690 10:08:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:20.690 10:08:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:20.690 10:08:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:20.690 10:08:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:20.690 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:20.690 fio-3.35 00:21:20.690 Starting 1 thread 00:21:23.222 00:21:23.222 test: (groupid=0, jobs=1): err= 0: pid=94871: Mon Dec 16 10:08:21 2024 00:21:23.222 read: IOPS=9098, BW=142MiB/s (149MB/s)(285MiB/2006msec) 00:21:23.222 slat (usec): min=2, max=153, avg= 3.50, stdev= 2.30 00:21:23.222 clat (usec): min=2028, max=16093, avg=8426.69, stdev=1871.33 00:21:23.222 lat (usec): min=2031, max=16096, avg=8430.20, stdev=1871.40 00:21:23.222 clat percentiles (usec): 00:21:23.222 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6718], 00:21:23.222 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:21:23.222 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10421], 95.00th=[11076], 00:21:23.222 | 99.00th=[12911], 99.50th=[13829], 99.90th=[15139], 99.95th=[15401], 00:21:23.222 | 99.99th=[15664] 00:21:23.222 bw ( KiB/s): min=63904, max=82304, per=49.46%, avg=72000.00, stdev=7870.40, samples=4 00:21:23.222 iops : 
min= 3994, max= 5144, avg=4500.00, stdev=491.90, samples=4 00:21:23.222 write: IOPS=5405, BW=84.5MiB/s (88.6MB/s)(147MiB/1735msec); 0 zone resets 00:21:23.222 slat (usec): min=31, max=288, avg=34.92, stdev= 7.44 00:21:23.222 clat (usec): min=4119, max=16431, avg=10083.72, stdev=1634.65 00:21:23.222 lat (usec): min=4151, max=16464, avg=10118.64, stdev=1634.83 00:21:23.222 clat percentiles (usec): 00:21:23.222 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 8717], 00:21:23.222 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:21:23.222 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12256], 95.00th=[12911], 00:21:23.222 | 99.00th=[14746], 99.50th=[15270], 99.90th=[15664], 99.95th=[15926], 00:21:23.222 | 99.99th=[16450] 00:21:23.222 bw ( KiB/s): min=67168, max=85568, per=86.75%, avg=75024.00, stdev=7920.01, samples=4 00:21:23.222 iops : min= 4198, max= 5348, avg=4689.00, stdev=495.00, samples=4 00:21:23.222 lat (msec) : 4=0.36%, 10=67.51%, 20=32.12% 00:21:23.222 cpu : usr=71.22%, sys=19.00%, ctx=4, majf=0, minf=1 00:21:23.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:23.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:23.222 issued rwts: total=18252,9378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:23.222 00:21:23.222 Run status group 0 (all jobs): 00:21:23.222 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2006-2006msec 00:21:23.222 WRITE: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=147MiB (154MB), run=1735-1735msec 00:21:23.222 10:08:21 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.222 10:08:21 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:23.222 10:08:21 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:23.222 10:08:21 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:23.222 10:08:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:23.222 10:08:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:23.222 10:08:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:23.222 10:08:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:23.222 10:08:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:23.222 10:08:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:23.222 10:08:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:23.222 10:08:21 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:23.481 Nvme0n1 00:21:23.481 10:08:21 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:23.739 10:08:22 -- host/fio.sh@53 -- # ls_guid=9932fe93-4db9-4cb6-bf55-8ee6503433eb 00:21:23.739 10:08:22 -- host/fio.sh@54 -- # get_lvs_free_mb 9932fe93-4db9-4cb6-bf55-8ee6503433eb 00:21:23.739 10:08:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=9932fe93-4db9-4cb6-bf55-8ee6503433eb 00:21:23.739 10:08:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:23.739 10:08:22 -- common/autotest_common.sh@1355 -- # local fc 00:21:23.739 10:08:22 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:23.739 10:08:22 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:24.002 10:08:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:24.002 { 00:21:24.002 "base_bdev": "Nvme0n1", 00:21:24.002 "block_size": 4096, 00:21:24.002 "cluster_size": 1073741824, 00:21:24.002 "free_clusters": 4, 00:21:24.002 "name": "lvs_0", 00:21:24.002 "total_data_clusters": 4, 00:21:24.002 "uuid": "9932fe93-4db9-4cb6-bf55-8ee6503433eb" 00:21:24.002 } 00:21:24.002 ]' 00:21:24.002 10:08:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="9932fe93-4db9-4cb6-bf55-8ee6503433eb") .free_clusters' 00:21:24.002 10:08:22 -- common/autotest_common.sh@1358 -- # fc=4 00:21:24.002 10:08:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="9932fe93-4db9-4cb6-bf55-8ee6503433eb") .cluster_size' 00:21:24.002 4096 00:21:24.002 10:08:22 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:24.002 10:08:22 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:24.002 10:08:22 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:24.002 10:08:22 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:24.288 9d5866e2-06db-4a82-b494-e28060094133 00:21:24.288 10:08:22 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:24.555 10:08:23 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:24.814 10:08:23 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:25.072 10:08:23 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:25.072 10:08:23 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:25.072 10:08:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:25.072 10:08:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.072 10:08:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:25.072 10:08:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.072 10:08:23 -- common/autotest_common.sh@1330 -- # shift 00:21:25.072 10:08:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:25.072 10:08:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:25.072 10:08:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:25.072 10:08:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:25.072 10:08:23 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:25.072 10:08:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:25.072 10:08:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:25.073 10:08:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:25.073 10:08:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:25.073 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:25.073 fio-3.35 00:21:25.073 Starting 1 thread 00:21:27.605 00:21:27.605 test: (groupid=0, jobs=1): err= 0: pid=95018: Mon Dec 16 10:08:25 2024 00:21:27.605 read: IOPS=6374, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec) 00:21:27.605 slat (nsec): min=1767, max=288716, avg=2698.10, stdev=4003.80 00:21:27.605 clat (usec): min=3847, max=18373, avg=10593.01, stdev=1103.33 00:21:27.605 lat (usec): min=3855, max=18376, avg=10595.71, stdev=1103.15 00:21:27.605 clat percentiles (usec): 00:21:27.605 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:21:27.605 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:21:27.605 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12387], 00:21:27.605 | 99.00th=[13435], 99.50th=[13829], 99.90th=[17433], 99.95th=[17695], 00:21:27.605 | 99.99th=[18220] 00:21:27.605 bw ( KiB/s): min=24720, max=26944, per=99.89%, avg=25468.00, stdev=1012.72, samples=4 00:21:27.605 iops : min= 6180, max= 6736, avg=6367.00, stdev=253.18, samples=4 00:21:27.605 write: IOPS=6374, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec); 0 zone resets 00:21:27.605 slat (nsec): min=1834, max=230613, avg=2747.22, stdev=3120.93 00:21:27.605 clat (usec): min=2091, max=18142, avg=9394.06, stdev=929.56 00:21:27.605 lat (usec): min=2102, max=18144, avg=9396.80, stdev=929.46 00:21:27.605 clat percentiles (usec): 00:21:27.605 | 1.00th=[ 7373], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8717], 00:21:27.605 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:21:27.605 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:21:27.605 | 99.00th=[11469], 99.50th=[11863], 99.90th=[15401], 99.95th=[16450], 00:21:27.605 | 99.99th=[17433] 00:21:27.605 bw ( KiB/s): min=24384, max=26440, per=99.91%, avg=25474.00, stdev=964.80, samples=4 00:21:27.605 iops : min= 6096, max= 6610, avg=6368.50, stdev=241.20, samples=4 00:21:27.605 lat (msec) : 4=0.05%, 10=52.23%, 20=47.71% 00:21:27.605 cpu : usr=71.50%, sys=22.02%, ctx=4, majf=0, minf=5 00:21:27.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:27.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.605 issued rwts: total=12799,12799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.605 00:21:27.605 Run status group 0 (all jobs): 00:21:27.605 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec 00:21:27.605 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec 00:21:27.605 10:08:26 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:27.864 10:08:26 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:28.123 10:08:26 -- host/fio.sh@64 -- # ls_nested_guid=65614f60-91ff-4a0b-b13f-3d92752ef576 00:21:28.123 10:08:26 -- host/fio.sh@65 -- # get_lvs_free_mb 65614f60-91ff-4a0b-b13f-3d92752ef576 00:21:28.123 10:08:26 -- common/autotest_common.sh@1353 -- # local lvs_uuid=65614f60-91ff-4a0b-b13f-3d92752ef576 00:21:28.123 10:08:26 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:28.123 10:08:26 -- common/autotest_common.sh@1355 -- # local fc 00:21:28.123 10:08:26 -- common/autotest_common.sh@1356 -- # local cs 00:21:28.123 10:08:26 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:28.382 10:08:26 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:28.382 { 00:21:28.382 "base_bdev": "Nvme0n1", 00:21:28.382 "block_size": 4096, 00:21:28.382 "cluster_size": 1073741824, 00:21:28.382 "free_clusters": 0, 00:21:28.382 "name": "lvs_0", 00:21:28.382 "total_data_clusters": 4, 00:21:28.382 "uuid": "9932fe93-4db9-4cb6-bf55-8ee6503433eb" 00:21:28.382 }, 00:21:28.382 { 00:21:28.382 "base_bdev": "9d5866e2-06db-4a82-b494-e28060094133", 00:21:28.382 "block_size": 4096, 00:21:28.382 "cluster_size": 4194304, 00:21:28.382 "free_clusters": 1022, 00:21:28.382 "name": "lvs_n_0", 00:21:28.382 "total_data_clusters": 1022, 00:21:28.382 "uuid": "65614f60-91ff-4a0b-b13f-3d92752ef576" 00:21:28.382 } 00:21:28.382 ]' 00:21:28.382 10:08:26 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="65614f60-91ff-4a0b-b13f-3d92752ef576") .free_clusters' 00:21:28.382 10:08:26 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:28.382 10:08:26 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="65614f60-91ff-4a0b-b13f-3d92752ef576") .cluster_size' 00:21:28.382 4088 00:21:28.382 10:08:26 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:28.382 10:08:26 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:28.382 10:08:26 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:28.382 10:08:26 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:28.640 486a0de6-705f-4ed1-961e-35fc8e97d90a 00:21:28.640 10:08:27 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:28.899 10:08:27 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:29.158 10:08:27 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:29.417 10:08:27 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.417 10:08:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.417 10:08:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:29.417 10:08:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:29.417 
10:08:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:29.417 10:08:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.417 10:08:27 -- common/autotest_common.sh@1330 -- # shift 00:21:29.417 10:08:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:29.417 10:08:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:29.417 10:08:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:29.417 10:08:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:29.417 10:08:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:29.417 10:08:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:29.417 10:08:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:29.417 10:08:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:29.676 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:29.676 fio-3.35 00:21:29.676 Starting 1 thread 00:21:32.209 00:21:32.209 test: (groupid=0, jobs=1): err= 0: pid=95145: Mon Dec 16 10:08:30 2024 00:21:32.209 read: IOPS=5575, BW=21.8MiB/s (22.8MB/s)(43.8MiB/2010msec) 00:21:32.209 slat (nsec): min=1872, max=354031, avg=3000.99, stdev=5149.58 00:21:32.209 clat (usec): min=4924, max=21480, avg=12242.27, stdev=1173.08 00:21:32.209 lat (usec): min=4934, max=21482, avg=12245.27, stdev=1172.87 00:21:32.209 clat percentiles (usec): 00:21:32.209 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:21:32.209 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:21:32.209 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:21:32.209 | 99.00th=[15008], 99.50th=[15533], 99.90th=[18482], 99.95th=[18744], 00:21:32.209 | 99.99th=[20317] 00:21:32.210 bw ( KiB/s): min=21040, max=22904, per=99.88%, avg=22274.00, stdev=837.30, samples=4 00:21:32.210 iops : min= 5260, max= 5726, avg=5568.50, stdev=209.33, samples=4 00:21:32.210 write: IOPS=5542, BW=21.6MiB/s (22.7MB/s)(43.5MiB/2010msec); 0 zone resets 00:21:32.210 slat (nsec): min=1906, max=300218, avg=3130.61, stdev=4150.66 00:21:32.210 clat (usec): min=2625, max=19007, avg=10693.94, stdev=1019.47 00:21:32.210 lat (usec): min=2638, max=19010, avg=10697.07, stdev=1019.37 00:21:32.210 clat percentiles (usec): 00:21:32.210 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:21:32.210 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:21:32.210 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:21:32.210 | 99.00th=[12911], 99.50th=[13304], 99.90th=[17695], 99.95th=[18744], 00:21:32.210 | 99.99th=[19006] 
00:21:32.210 bw ( KiB/s): min=21896, max=22560, per=99.98%, avg=22164.00, stdev=303.33, samples=4 00:21:32.210 iops : min= 5474, max= 5640, avg=5541.00, stdev=75.83, samples=4 00:21:32.210 lat (msec) : 4=0.03%, 10=12.56%, 20=87.40%, 50=0.01% 00:21:32.210 cpu : usr=72.27%, sys=21.25%, ctx=8, majf=0, minf=5 00:21:32.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:32.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:32.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:32.210 issued rwts: total=11206,11140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:32.210 00:21:32.210 Run status group 0 (all jobs): 00:21:32.210 READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=43.8MiB (45.9MB), run=2010-2010msec 00:21:32.210 WRITE: bw=21.6MiB/s (22.7MB/s), 21.6MiB/s-21.6MiB/s (22.7MB/s-22.7MB/s), io=43.5MiB (45.6MB), run=2010-2010msec 00:21:32.210 10:08:30 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:32.210 10:08:30 -- host/fio.sh@74 -- # sync 00:21:32.210 10:08:30 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:32.468 10:08:31 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:32.726 10:08:31 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:32.990 10:08:31 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:33.249 10:08:31 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:33.508 10:08:31 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:33.508 10:08:31 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:33.508 10:08:31 -- host/fio.sh@86 -- # nvmftestfini 00:21:33.508 10:08:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:33.508 10:08:31 -- nvmf/common.sh@116 -- # sync 00:21:33.508 10:08:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:33.508 10:08:31 -- nvmf/common.sh@119 -- # set +e 00:21:33.508 10:08:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:33.508 10:08:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:33.508 rmmod nvme_tcp 00:21:33.508 rmmod nvme_fabrics 00:21:33.508 rmmod nvme_keyring 00:21:33.508 10:08:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:33.508 10:08:32 -- nvmf/common.sh@123 -- # set -e 00:21:33.508 10:08:32 -- nvmf/common.sh@124 -- # return 0 00:21:33.508 10:08:32 -- nvmf/common.sh@477 -- # '[' -n 94698 ']' 00:21:33.508 10:08:32 -- nvmf/common.sh@478 -- # killprocess 94698 00:21:33.508 10:08:32 -- common/autotest_common.sh@936 -- # '[' -z 94698 ']' 00:21:33.508 10:08:32 -- common/autotest_common.sh@940 -- # kill -0 94698 00:21:33.508 10:08:32 -- common/autotest_common.sh@941 -- # uname 00:21:33.508 10:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.508 10:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94698 00:21:33.508 killing process with pid 94698 00:21:33.508 10:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:33.508 10:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.508 10:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94698' 00:21:33.508 
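For reference, the lvol portion of fio.sh traced above reduces to the RPC sequence below; the commands are taken from the xtrace (full /home/vagrant/spdk_repo/spdk/scripts/rpc.py paths shortened to rpc.py), and the sizes in the comments come from the bdev_lvol_get_lvstores output shown in the trace.

  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2   # local NVMe exposed as Nvme0n1
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0                       # 1 GiB clusters, 4 free clusters
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                                       # 4 clusters x 1024 MiB = 4096 MiB
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0           # second lvstore nested on the first lvol
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088                                # 1022 clusters x 4 MiB = 4088 MiB
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420

Each subsystem is exercised with the same example_config.fio run over NVMe/TCP, and get_lvs_free_mb is simply free_clusters x cluster_size converted to MiB (4 x 1024 = 4096 for lvs_0, 1022 x 4 = 4088 for lvs_n_0). Teardown walks back in reverse: delete lbd_nest_0 and lvs_n_0, delete lbd_0 and lvs_0, bdev_nvme_detach_controller Nvme0, then nvmftestfini unloads nvme_tcp/nvme_fabrics/nvme_keyring before killing the target process.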
10:08:32 -- common/autotest_common.sh@955 -- # kill 94698 00:21:33.508 10:08:32 -- common/autotest_common.sh@960 -- # wait 94698 00:21:33.767 10:08:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:33.767 10:08:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:33.767 10:08:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:33.767 10:08:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.767 10:08:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:33.767 10:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.767 10:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.767 10:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.767 10:08:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:33.767 00:21:33.767 real 0m18.954s 00:21:33.767 user 1m23.482s 00:21:33.767 sys 0m4.386s 00:21:33.767 10:08:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:33.767 ************************************ 00:21:33.767 END TEST nvmf_fio_host 00:21:33.767 ************************************ 00:21:33.767 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:33.767 10:08:32 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:33.767 10:08:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:33.767 10:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.767 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:33.767 ************************************ 00:21:33.767 START TEST nvmf_failover 00:21:33.767 ************************************ 00:21:33.767 10:08:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:34.027 * Looking for test storage... 00:21:34.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:34.027 10:08:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:34.027 10:08:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:34.027 10:08:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:34.027 10:08:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:34.027 10:08:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:34.027 10:08:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:34.027 10:08:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:34.027 10:08:32 -- scripts/common.sh@335 -- # IFS=.-: 00:21:34.027 10:08:32 -- scripts/common.sh@335 -- # read -ra ver1 00:21:34.027 10:08:32 -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.027 10:08:32 -- scripts/common.sh@336 -- # read -ra ver2 00:21:34.027 10:08:32 -- scripts/common.sh@337 -- # local 'op=<' 00:21:34.027 10:08:32 -- scripts/common.sh@339 -- # ver1_l=2 00:21:34.027 10:08:32 -- scripts/common.sh@340 -- # ver2_l=1 00:21:34.027 10:08:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:34.027 10:08:32 -- scripts/common.sh@343 -- # case "$op" in 00:21:34.027 10:08:32 -- scripts/common.sh@344 -- # : 1 00:21:34.027 10:08:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:34.027 10:08:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.027 10:08:32 -- scripts/common.sh@364 -- # decimal 1 00:21:34.027 10:08:32 -- scripts/common.sh@352 -- # local d=1 00:21:34.027 10:08:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.027 10:08:32 -- scripts/common.sh@354 -- # echo 1 00:21:34.027 10:08:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:34.027 10:08:32 -- scripts/common.sh@365 -- # decimal 2 00:21:34.027 10:08:32 -- scripts/common.sh@352 -- # local d=2 00:21:34.027 10:08:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.027 10:08:32 -- scripts/common.sh@354 -- # echo 2 00:21:34.027 10:08:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:34.027 10:08:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:34.027 10:08:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:34.027 10:08:32 -- scripts/common.sh@367 -- # return 0 00:21:34.027 10:08:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.027 10:08:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.027 --rc genhtml_branch_coverage=1 00:21:34.027 --rc genhtml_function_coverage=1 00:21:34.027 --rc genhtml_legend=1 00:21:34.027 --rc geninfo_all_blocks=1 00:21:34.027 --rc geninfo_unexecuted_blocks=1 00:21:34.027 00:21:34.027 ' 00:21:34.027 10:08:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.027 --rc genhtml_branch_coverage=1 00:21:34.027 --rc genhtml_function_coverage=1 00:21:34.027 --rc genhtml_legend=1 00:21:34.027 --rc geninfo_all_blocks=1 00:21:34.027 --rc geninfo_unexecuted_blocks=1 00:21:34.027 00:21:34.027 ' 00:21:34.027 10:08:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.027 --rc genhtml_branch_coverage=1 00:21:34.027 --rc genhtml_function_coverage=1 00:21:34.027 --rc genhtml_legend=1 00:21:34.027 --rc geninfo_all_blocks=1 00:21:34.027 --rc geninfo_unexecuted_blocks=1 00:21:34.027 00:21:34.027 ' 00:21:34.027 10:08:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.027 --rc genhtml_branch_coverage=1 00:21:34.027 --rc genhtml_function_coverage=1 00:21:34.027 --rc genhtml_legend=1 00:21:34.027 --rc geninfo_all_blocks=1 00:21:34.027 --rc geninfo_unexecuted_blocks=1 00:21:34.027 00:21:34.027 ' 00:21:34.027 10:08:32 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:34.027 10:08:32 -- nvmf/common.sh@7 -- # uname -s 00:21:34.027 10:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.027 10:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.027 10:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.027 10:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.027 10:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.027 10:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.027 10:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.027 10:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.027 10:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.027 10:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:21:34.027 
10:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:21:34.027 10:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.027 10:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.027 10:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:34.027 10:08:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:34.027 10:08:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.027 10:08:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.027 10:08:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.027 10:08:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.027 10:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.027 10:08:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.027 10:08:32 -- paths/export.sh@5 -- # export PATH 00:21:34.027 10:08:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.027 10:08:32 -- nvmf/common.sh@46 -- # : 0 00:21:34.027 10:08:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:34.027 10:08:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:34.027 10:08:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:34.027 10:08:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.027 10:08:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.027 10:08:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:34.027 10:08:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:34.027 10:08:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:34.027 10:08:32 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.027 10:08:32 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.027 10:08:32 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.027 10:08:32 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.027 10:08:32 -- host/failover.sh@18 -- # nvmftestinit 00:21:34.027 10:08:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:34.027 10:08:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.027 10:08:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:34.027 10:08:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:34.027 10:08:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:34.027 10:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.027 10:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.027 10:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.027 10:08:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:34.027 10:08:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:34.027 10:08:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.027 10:08:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.028 10:08:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:34.028 10:08:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:34.028 10:08:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:34.028 10:08:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:34.028 10:08:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:34.028 10:08:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.028 10:08:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:34.028 10:08:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:34.028 10:08:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:34.028 10:08:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:34.028 10:08:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:34.028 10:08:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:34.028 Cannot find device "nvmf_tgt_br" 00:21:34.028 10:08:32 -- nvmf/common.sh@154 -- # true 00:21:34.028 10:08:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:34.028 Cannot find device "nvmf_tgt_br2" 00:21:34.028 10:08:32 -- nvmf/common.sh@155 -- # true 00:21:34.028 10:08:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:34.028 10:08:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:34.028 Cannot find device "nvmf_tgt_br" 00:21:34.028 10:08:32 -- nvmf/common.sh@157 -- # true 00:21:34.028 10:08:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:34.028 Cannot find device "nvmf_tgt_br2" 00:21:34.028 10:08:32 -- nvmf/common.sh@158 -- # true 00:21:34.028 10:08:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:34.287 10:08:32 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:34.287 10:08:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:34.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.287 10:08:32 -- nvmf/common.sh@161 -- # true 00:21:34.287 10:08:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:34.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:34.287 10:08:32 -- nvmf/common.sh@162 -- # true 00:21:34.287 10:08:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:34.287 10:08:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:34.287 10:08:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:34.287 10:08:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:34.287 10:08:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:34.287 10:08:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:34.287 10:08:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:34.287 10:08:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:34.287 10:08:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:34.287 10:08:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:34.287 10:08:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:34.287 10:08:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:34.287 10:08:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:34.287 10:08:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:34.287 10:08:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:34.287 10:08:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:34.287 10:08:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:34.287 10:08:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:34.287 10:08:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:34.287 10:08:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:34.287 10:08:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:34.287 10:08:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:34.287 10:08:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:34.287 10:08:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:34.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:34.287 00:21:34.287 --- 10.0.0.2 ping statistics --- 00:21:34.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.287 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:34.287 10:08:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:34.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:34.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:21:34.287 00:21:34.287 --- 10.0.0.3 ping statistics --- 00:21:34.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.287 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:34.287 10:08:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:34.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:34.287 00:21:34.287 --- 10.0.0.1 ping statistics --- 00:21:34.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.287 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:34.287 10:08:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.287 10:08:32 -- nvmf/common.sh@421 -- # return 0 00:21:34.287 10:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:34.287 10:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.287 10:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:34.287 10:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:34.287 10:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.287 10:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:34.287 10:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:34.287 10:08:32 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:34.287 10:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:34.287 10:08:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.287 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:34.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.287 10:08:32 -- nvmf/common.sh@469 -- # nvmfpid=95420 00:21:34.287 10:08:32 -- nvmf/common.sh@470 -- # waitforlisten 95420 00:21:34.287 10:08:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:34.287 10:08:32 -- common/autotest_common.sh@829 -- # '[' -z 95420 ']' 00:21:34.287 10:08:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.287 10:08:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.287 10:08:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.287 10:08:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.287 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:21:34.546 [2024-12-16 10:08:32.956975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:34.546 [2024-12-16 10:08:32.957253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.546 [2024-12-16 10:08:33.097924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:34.546 [2024-12-16 10:08:33.160610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:34.546 [2024-12-16 10:08:33.161035] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.546 [2024-12-16 10:08:33.161086] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:34.546 [2024-12-16 10:08:33.161212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.546 [2024-12-16 10:08:33.161423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.546 [2024-12-16 10:08:33.161950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.546 [2024-12-16 10:08:33.162006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.479 10:08:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.479 10:08:33 -- common/autotest_common.sh@862 -- # return 0 00:21:35.479 10:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:35.479 10:08:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.479 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:21:35.479 10:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.479 10:08:33 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:35.737 [2024-12-16 10:08:34.206830] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.737 10:08:34 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:35.996 Malloc0 00:21:35.996 10:08:34 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.253 10:08:34 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:36.511 10:08:34 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.511 [2024-12-16 10:08:35.097111] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.511 10:08:35 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:36.769 [2024-12-16 10:08:35.309240] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.769 10:08:35 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:37.028 [2024-12-16 10:08:35.517420] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:37.028 10:08:35 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:37.028 10:08:35 -- host/failover.sh@31 -- # bdevperf_pid=95532 00:21:37.028 10:08:35 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.028 10:08:35 -- host/failover.sh@34 -- # waitforlisten 95532 /var/tmp/bdevperf.sock 00:21:37.028 10:08:35 -- common/autotest_common.sh@829 -- # '[' -z 95532 ']' 00:21:37.028 10:08:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.028 10:08:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
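The target and initiator processes configured here are driven entirely over SPDK's JSON-RPC CLI, and the calls are all visible in the xtrace output. Condensed, the sequence looks like the sketch below; the subcommands and arguments are copied from this log, relative paths stand in for the /home/vagrant/spdk_repo workspace paths, and launching nvmf_tgt under ip netns exec matches how nvmfappstart started it here, so this is a reading of the log rather than a canonical recipe.

  # target inside the namespace: instance 0, tracepoint group mask 0xFFFF, cores 1-3 (-m 0xE)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # TCP transport, a 64 MiB Malloc namespace, one subsystem with three listeners on 10.0.0.2
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # initiator side: bdevperf idles on its own RPC socket (-z) until controllers are attached,
  # then runs a 128-deep, 4 KiB verify workload for 15 seconds
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &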
00:21:37.028 10:08:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.028 10:08:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.028 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:21:38.403 10:08:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.403 10:08:36 -- common/autotest_common.sh@862 -- # return 0 00:21:38.403 10:08:36 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.403 NVMe0n1 00:21:38.403 10:08:36 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.662 00:21:38.662 10:08:37 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.662 10:08:37 -- host/failover.sh@39 -- # run_test_pid=95578 00:21:38.662 10:08:37 -- host/failover.sh@41 -- # sleep 1 00:21:39.598 10:08:38 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.857 [2024-12-16 10:08:38.438215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438275] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438287] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438323] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438396] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438405] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438576] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 
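The burst of identical recv-state messages here, and the similar bursts for tqpair 0xd66380 and 0xd66a60 further down, accompany the target tearing down the TCP queue pairs on whichever listener was just removed; bdevperf's NVMe bdev then carries on over a path that is still listening. Read together with the RPC calls in this log, the failover exercise boils down to the sketch below. The commands and sleeps are the ones logged here; the comments about which port carries I/O at each step are an editorial interpretation of the test's intent, not something the log states explicitly.

  # attach the same subsystem twice through bdevperf's RPC socket: 4420 (active) and 4421 (alternate)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # run_test_pid
  sleep 1

  # 1) drop the 4420 listener: I/O should continue on 4421
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3

  # 2) attach a third path on 4422, then drop 4421: I/O should move to 4422
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # 3) bring the 4420 listener back, then drop 4422: I/O should fall back to the original path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # wait for the 15 s verify run started above to complete
  wait "$run_test_pid"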
00:21:39.857 [2024-12-16 10:08:38.438592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.857 [2024-12-16 10:08:38.438672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is 
same with the state(5) to be set 00:21:39.858 [2024-12-16 10:08:38.438771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64c90 is same with the state(5) to be set 00:21:39.858 10:08:38 -- host/failover.sh@45 -- # sleep 3 00:21:43.250 10:08:41 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.250 00:21:43.250 10:08:41 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.509 [2024-12-16 10:08:42.006586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.509 [2024-12-16 10:08:42.006854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006946] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 [2024-12-16 10:08:42.006987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66380 is same with the state(5) to be set 00:21:43.510 10:08:42 -- host/failover.sh@50 -- # sleep 3 00:21:46.830 10:08:45 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.830 [2024-12-16 10:08:45.288704] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.830 10:08:45 -- host/failover.sh@55 -- # sleep 1 00:21:47.766 10:08:46 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:48.025 [2024-12-16 10:08:46.563439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 [2024-12-16 10:08:46.563668] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66a60 is same with the state(5) to be set 00:21:48.025 10:08:46 -- host/failover.sh@59 -- # wait 95578 00:21:54.596 0 00:21:54.596 10:08:52 -- host/failover.sh@61 -- # killprocess 95532 00:21:54.596 10:08:52 -- common/autotest_common.sh@936 -- # '[' -z 95532 ']' 00:21:54.596 10:08:52 -- common/autotest_common.sh@940 -- # kill -0 95532 00:21:54.596 10:08:52 -- common/autotest_common.sh@941 -- # uname 00:21:54.596 10:08:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.596 10:08:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95532 00:21:54.596 killing process with pid 95532 00:21:54.596 10:08:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:54.596 10:08:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:54.596 10:08:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95532' 00:21:54.596 10:08:52 -- common/autotest_common.sh@955 -- # kill 95532 00:21:54.596 10:08:52 -- common/autotest_common.sh@960 -- # wait 95532 00:21:54.596 10:08:52 -- host/failover.sh@63 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:54.596 [2024-12-16 10:08:35.573804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:54.596 [2024-12-16 10:08:35.573896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95532 ] 00:21:54.596 [2024-12-16 10:08:35.712883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.596 [2024-12-16 10:08:35.778111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.596 Running I/O for 15 seconds... 00:21:54.596 [2024-12-16 10:08:38.439023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 
10:08:38.439309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.596 [2024-12-16 10:08:38.439528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.596 [2024-12-16 10:08:38.439541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.439983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.439997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6680 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.597 [2024-12-16 10:08:38.440521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.597 [2024-12-16 
10:08:38.440642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.597 [2024-12-16 10:08:38.440681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.597 [2024-12-16 10:08:38.440783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.597 [2024-12-16 10:08:38.440825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.597 [2024-12-16 10:08:38.440840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.597 [2024-12-16 10:08:38.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.440867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.440880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.440894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.440907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.440922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.440940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.440954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.440967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.440982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.440995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:54.598 [2024-12-16 10:08:38.441621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.441927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441941] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.441973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.441986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.442001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.442014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.442090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.442106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.442122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.598 [2024-12-16 10:08:38.442136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.442151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.598 [2024-12-16 10:08:38.442165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.598 [2024-12-16 10:08:38.442181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6896 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.599 [2024-12-16 10:08:38.442926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 
10:08:38.442960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.442976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.442990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.599 [2024-12-16 10:08:38.443175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6f130 is same with the state(5) to be set 00:21:54.599 [2024-12-16 10:08:38.443207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.599 [2024-12-16 10:08:38.443217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.599 [2024-12-16 10:08:38.443233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7120 len:8 PRP1 0x0 PRP2 0x0 00:21:54.599 [2024-12-16 10:08:38.443247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443319] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e6f130 was disconnected and freed. reset controller. 
00:21:54.599 [2024-12-16 10:08:38.443336] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:54.599 [2024-12-16 10:08:38.443417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.599 [2024-12-16 10:08:38.443441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.599 [2024-12-16 10:08:38.443470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.599 [2024-12-16 10:08:38.443498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.599 [2024-12-16 10:08:38.443525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.599 [2024-12-16 10:08:38.443539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.599 [2024-12-16 10:08:38.443593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deacb0 (9): Bad file descriptor 00:21:54.599 [2024-12-16 10:08:38.446077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.599 [2024-12-16 10:08:38.475105] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:54.599 [2024-12-16 10:08:42.005904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.600 [2024-12-16 10:08:42.005964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.006000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.600 [2024-12-16 10:08:42.006013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.006085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.600 [2024-12-16 10:08:42.006100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.006114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.600 [2024-12-16 10:08:42.006127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.006140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deacb0 is same with the state(5) to be set 00:21:54.600 [2024-12-16 10:08:42.007075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.007979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.007992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.008006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.008019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.008033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.008065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.600 [2024-12-16 10:08:42.008078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.600 [2024-12-16 10:08:42.008092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.008209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 
[2024-12-16 10:08:42.008285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.008298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.008330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.008932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.008958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.008985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.008999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.601 [2024-12-16 10:08:42.009071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45696 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.601 [2024-12-16 10:08:42.009345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.601 [2024-12-16 10:08:42.009372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:54.602 [2024-12-16 10:08:42.009558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.009965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.009980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.009994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.010274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.602 [2024-12-16 10:08:42.010471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.602 [2024-12-16 10:08:42.010750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.602 [2024-12-16 10:08:42.010771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.603 [2024-12-16 10:08:42.010785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.010975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.010989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:42.011284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 
[2024-12-16 10:08:42.011297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e49b10 is same with the state(5) to be set 00:21:54.603 [2024-12-16 10:08:42.011312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.603 [2024-12-16 10:08:42.011321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.603 [2024-12-16 10:08:42.011331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:21:54.603 [2024-12-16 10:08:42.011343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:42.011396] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e49b10 was disconnected and freed. reset controller. 00:21:54.603 [2024-12-16 10:08:42.011412] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:54.603 [2024-12-16 10:08:42.011424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.603 [2024-12-16 10:08:42.013834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.603 [2024-12-16 10:08:42.013877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deacb0 (9): Bad file descriptor 00:21:54.603 [2024-12-16 10:08:42.047433] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:54.603 [2024-12-16 10:08:46.562840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.603 [2024-12-16 10:08:46.562914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.562950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.603 [2024-12-16 10:08:46.562963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.562975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.603 [2024-12-16 10:08:46.562986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.562998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.603 [2024-12-16 10:08:46.563009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.563022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1deacb0 is same with the state(5) to be set 00:21:54.603 [2024-12-16 10:08:46.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.563869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 
10:08:46.563894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.563909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.563924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.563938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.563953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.563966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.563981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.603 [2024-12-16 10:08:46.564329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.603 [2024-12-16 10:08:46.564342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.564911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.564973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.564993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:54.604 [2024-12-16 10:08:46.565304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565716] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.604 [2024-12-16 10:08:46.565875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.604 [2024-12-16 10:08:46.565888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.604 [2024-12-16 10:08:46.565900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.565913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.565925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.565945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.565957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.565970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.565992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566118] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:54.605 [2024-12-16 10:08:46.566826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.566948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.566982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.566995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.567012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.567026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.605 [2024-12-16 10:08:46.567039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.567052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.567064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.567078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.567090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.605 [2024-12-16 10:08:46.567107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.605 [2024-12-16 10:08:46.567120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567425] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73288 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.567941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.567966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.567979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.606 [2024-12-16 10:08:46.568006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:54.606 [2024-12-16 10:08:46.568089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.606 [2024-12-16 10:08:46.568249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71210 is same with the state(5) to be set 00:21:54.606 [2024-12-16 10:08:46.568278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.606 [2024-12-16 10:08:46.568288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.606 [2024-12-16 10:08:46.568298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73376 len:8 PRP1 0x0 PRP2 0x0 00:21:54.606 [2024-12-16 10:08:46.568310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.606 [2024-12-16 10:08:46.568365] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e71210 was disconnected and freed. reset controller. 00:21:54.606 [2024-12-16 10:08:46.568382] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:54.606 [2024-12-16 10:08:46.568395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.606 [2024-12-16 10:08:46.570601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.606 [2024-12-16 10:08:46.570638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deacb0 (9): Bad file descriptor 00:21:54.606 [2024-12-16 10:08:46.603962] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
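The three reset/failover cycles above (4420 -> 4421 -> 4422 -> 4420) are what the test verifies next. A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file this run uses (the grep pattern and expected count are the ones the trace itself uses a few lines below; the variable names here are illustrative only):

  # Count completed controller resets in the captured bdevperf log
  # (hypothetical helper mirroring the grep in host/failover.sh below).
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # path taken from this trace
  count=$(grep -c 'Resetting controller successful' "$log")
  # This run drove three failovers, so the test expects exactly 3.
  (( count == 3 )) || echo "unexpected reset count: $count"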
00:21:54.606 
00:21:54.606 Latency(us) 
00:21:54.606 [2024-12-16T10:08:53.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.606 [2024-12-16T10:08:53.231Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:21:54.606 Verification LBA range: start 0x0 length 0x4000 
00:21:54.606 NVMe0n1 : 15.01 14299.41 55.86 317.55 0.00 8741.06 606.95 16681.89 
00:21:54.606 [2024-12-16T10:08:53.231Z] =================================================================================================================== 
00:21:54.606 [2024-12-16T10:08:53.232Z] Total : 14299.41 55.86 317.55 0.00 8741.06 606.95 16681.89 
00:21:54.607 Received shutdown signal, test time was about 15.000000 seconds 
00:21:54.607 
00:21:54.607 Latency(us) 
00:21:54.607 [2024-12-16T10:08:53.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.607 [2024-12-16T10:08:53.232Z] =================================================================================================================== 
00:21:54.607 [2024-12-16T10:08:53.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:21:54.607 10:08:52 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:21:54.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:54.607 10:08:52 -- host/failover.sh@65 -- # count=3 
00:21:54.607 10:08:52 -- host/failover.sh@67 -- # (( count != 3 )) 
00:21:54.607 10:08:52 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:21:54.607 10:08:52 -- host/failover.sh@73 -- # bdevperf_pid=95778 
00:21:54.607 10:08:52 -- host/failover.sh@75 -- # waitforlisten 95778 /var/tmp/bdevperf.sock 
00:21:54.607 10:08:52 -- common/autotest_common.sh@829 -- # '[' -z 95778 ']' 
00:21:54.607 10:08:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:54.607 10:08:52 -- common/autotest_common.sh@834 -- # local max_retries=100 
00:21:54.607 10:08:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:54.607 10:08:52 -- common/autotest_common.sh@838 -- # xtrace_disable 
00:21:54.607 10:08:52 -- common/autotest_common.sh@10 -- # set +x 
00:21:55.174 10:08:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:21:55.174 10:08:53 -- common/autotest_common.sh@862 -- # return 0 
00:21:55.174 10:08:53 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:21:55.174 [2024-12-16 10:08:53.793807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 
00:21:55.432 10:08:53 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
00:21:55.691 [2024-12-16 10:08:54.058010] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
00:21:55.691 10:08:54 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:55.950 NVMe0n1 
00:21:55.950 10:08:54 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:56.208 
00:21:56.208 10:08:54 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:56.467 
00:21:56.467 10:08:54 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:21:56.467 10:08:54 -- host/failover.sh@82 -- # grep -q NVMe0 
00:21:56.729 10:08:55 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:21:56.987 10:08:55 -- host/failover.sh@87 -- # sleep 3 
00:22:00.273 10:08:58 -- host/failover.sh@88 -- # grep -q NVMe0 
00:22:00.273 10:08:58 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:22:00.273 10:08:58 -- host/failover.sh@90 -- # run_test_pid=95925 
00:22:00.273 10:08:58 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:22:00.273 10:08:58 -- host/failover.sh@92 -- # wait 95925 
00:22:01.650 0 
00:22:01.650 10:08:59 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
00:22:01.650 [2024-12-16 10:08:52.568427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:01.650 [2024-12-16 10:08:52.568550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95778 ] 00:22:01.650 [2024-12-16 10:08:52.704497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.650 [2024-12-16 10:08:52.769234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.650 [2024-12-16 10:08:55.459922] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:01.650 [2024-12-16 10:08:55.460039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.650 [2024-12-16 10:08:55.460064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-12-16 10:08:55.460081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.650 [2024-12-16 10:08:55.460094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-12-16 10:08:55.460107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.650 [2024-12-16 10:08:55.460120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-12-16 10:08:55.460133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.650 [2024-12-16 10:08:55.460145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.650 [2024-12-16 10:08:55.460158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.650 [2024-12-16 10:08:55.460218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.650 [2024-12-16 10:08:55.460247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2384cb0 (9): Bad file descriptor 00:22:01.650 [2024-12-16 10:08:55.463493] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:01.650 Running I/O for 1 seconds... 
00:22:01.650 00:22:01.650 Latency(us) 00:22:01.650 [2024-12-16T10:09:00.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.650 [2024-12-16T10:09:00.275Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:01.650 Verification LBA range: start 0x0 length 0x4000 00:22:01.650 NVMe0n1 : 1.01 14975.59 58.50 0.00 0.00 8509.88 1094.75 10187.87 00:22:01.650 [2024-12-16T10:09:00.275Z] =================================================================================================================== 00:22:01.650 [2024-12-16T10:09:00.275Z] Total : 14975.59 58.50 0.00 0.00 8509.88 1094.75 10187.87 00:22:01.650 10:08:59 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.650 10:08:59 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:01.650 10:09:00 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.908 10:09:00 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.908 10:09:00 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:02.167 10:09:00 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.426 10:09:00 -- host/failover.sh@101 -- # sleep 3 00:22:05.713 10:09:03 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.713 10:09:03 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:05.713 10:09:04 -- host/failover.sh@108 -- # killprocess 95778 00:22:05.713 10:09:04 -- common/autotest_common.sh@936 -- # '[' -z 95778 ']' 00:22:05.713 10:09:04 -- common/autotest_common.sh@940 -- # kill -0 95778 00:22:05.713 10:09:04 -- common/autotest_common.sh@941 -- # uname 00:22:05.713 10:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.713 10:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95778 00:22:05.713 killing process with pid 95778 00:22:05.713 10:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:05.713 10:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:05.713 10:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95778' 00:22:05.713 10:09:04 -- common/autotest_common.sh@955 -- # kill 95778 00:22:05.713 10:09:04 -- common/autotest_common.sh@960 -- # wait 95778 00:22:05.972 10:09:04 -- host/failover.sh@110 -- # sync 00:22:05.972 10:09:04 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.230 10:09:04 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:06.230 10:09:04 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:06.230 10:09:04 -- host/failover.sh@116 -- # nvmftestfini 00:22:06.230 10:09:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:06.230 10:09:04 -- nvmf/common.sh@116 -- # sync 00:22:06.230 10:09:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:06.230 10:09:04 -- nvmf/common.sh@119 -- # set +e 00:22:06.230 10:09:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:06.230 10:09:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:06.230 rmmod nvme_tcp 
00:22:06.230 rmmod nvme_fabrics 00:22:06.230 rmmod nvme_keyring 00:22:06.230 10:09:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:06.230 10:09:04 -- nvmf/common.sh@123 -- # set -e 00:22:06.230 10:09:04 -- nvmf/common.sh@124 -- # return 0 00:22:06.230 10:09:04 -- nvmf/common.sh@477 -- # '[' -n 95420 ']' 00:22:06.230 10:09:04 -- nvmf/common.sh@478 -- # killprocess 95420 00:22:06.230 10:09:04 -- common/autotest_common.sh@936 -- # '[' -z 95420 ']' 00:22:06.230 10:09:04 -- common/autotest_common.sh@940 -- # kill -0 95420 00:22:06.230 10:09:04 -- common/autotest_common.sh@941 -- # uname 00:22:06.230 10:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:06.230 10:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95420 00:22:06.230 killing process with pid 95420 00:22:06.230 10:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:06.230 10:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:06.230 10:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95420' 00:22:06.230 10:09:04 -- common/autotest_common.sh@955 -- # kill 95420 00:22:06.230 10:09:04 -- common/autotest_common.sh@960 -- # wait 95420 00:22:06.489 10:09:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:06.489 10:09:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:06.489 10:09:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:06.489 10:09:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.489 10:09:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:06.489 10:09:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.489 10:09:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.489 10:09:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.489 10:09:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:06.489 ************************************ 00:22:06.489 END TEST nvmf_failover 00:22:06.489 ************************************ 00:22:06.489 00:22:06.489 real 0m32.676s 00:22:06.489 user 2m6.831s 00:22:06.489 sys 0m4.811s 00:22:06.489 10:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:06.489 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:22:06.489 10:09:05 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:06.489 10:09:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:06.489 10:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.489 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:22:06.747 ************************************ 00:22:06.747 START TEST nvmf_discovery 00:22:06.747 ************************************ 00:22:06.747 10:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:06.747 * Looking for test storage... 
00:22:06.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:06.747 10:09:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:06.747 10:09:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:06.747 10:09:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:06.747 10:09:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:06.747 10:09:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:06.747 10:09:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:06.747 10:09:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:06.747 10:09:05 -- scripts/common.sh@335 -- # IFS=.-: 00:22:06.747 10:09:05 -- scripts/common.sh@335 -- # read -ra ver1 00:22:06.747 10:09:05 -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.747 10:09:05 -- scripts/common.sh@336 -- # read -ra ver2 00:22:06.747 10:09:05 -- scripts/common.sh@337 -- # local 'op=<' 00:22:06.747 10:09:05 -- scripts/common.sh@339 -- # ver1_l=2 00:22:06.747 10:09:05 -- scripts/common.sh@340 -- # ver2_l=1 00:22:06.747 10:09:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:06.747 10:09:05 -- scripts/common.sh@343 -- # case "$op" in 00:22:06.747 10:09:05 -- scripts/common.sh@344 -- # : 1 00:22:06.748 10:09:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:06.748 10:09:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.748 10:09:05 -- scripts/common.sh@364 -- # decimal 1 00:22:06.748 10:09:05 -- scripts/common.sh@352 -- # local d=1 00:22:06.748 10:09:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.748 10:09:05 -- scripts/common.sh@354 -- # echo 1 00:22:06.748 10:09:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:06.748 10:09:05 -- scripts/common.sh@365 -- # decimal 2 00:22:06.748 10:09:05 -- scripts/common.sh@352 -- # local d=2 00:22:06.748 10:09:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.748 10:09:05 -- scripts/common.sh@354 -- # echo 2 00:22:06.748 10:09:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:06.748 10:09:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:06.748 10:09:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:06.748 10:09:05 -- scripts/common.sh@367 -- # return 0 00:22:06.748 10:09:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.748 10:09:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:06.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.748 --rc genhtml_branch_coverage=1 00:22:06.748 --rc genhtml_function_coverage=1 00:22:06.748 --rc genhtml_legend=1 00:22:06.748 --rc geninfo_all_blocks=1 00:22:06.748 --rc geninfo_unexecuted_blocks=1 00:22:06.748 00:22:06.748 ' 00:22:06.748 10:09:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:06.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.748 --rc genhtml_branch_coverage=1 00:22:06.748 --rc genhtml_function_coverage=1 00:22:06.748 --rc genhtml_legend=1 00:22:06.748 --rc geninfo_all_blocks=1 00:22:06.748 --rc geninfo_unexecuted_blocks=1 00:22:06.748 00:22:06.748 ' 00:22:06.748 10:09:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:06.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.748 --rc genhtml_branch_coverage=1 00:22:06.748 --rc genhtml_function_coverage=1 00:22:06.748 --rc genhtml_legend=1 00:22:06.748 --rc geninfo_all_blocks=1 00:22:06.748 --rc geninfo_unexecuted_blocks=1 00:22:06.748 00:22:06.748 ' 00:22:06.748 
10:09:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:06.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.748 --rc genhtml_branch_coverage=1 00:22:06.748 --rc genhtml_function_coverage=1 00:22:06.748 --rc genhtml_legend=1 00:22:06.748 --rc geninfo_all_blocks=1 00:22:06.748 --rc geninfo_unexecuted_blocks=1 00:22:06.748 00:22:06.748 ' 00:22:06.748 10:09:05 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:06.748 10:09:05 -- nvmf/common.sh@7 -- # uname -s 00:22:06.748 10:09:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.748 10:09:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.748 10:09:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.748 10:09:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.748 10:09:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.748 10:09:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.748 10:09:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.748 10:09:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.748 10:09:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.748 10:09:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:06.748 10:09:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:06.748 10:09:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.748 10:09:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.748 10:09:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:06.748 10:09:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:06.748 10:09:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.748 10:09:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.748 10:09:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.748 10:09:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.748 10:09:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.748 10:09:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.748 10:09:05 -- paths/export.sh@5 -- # export PATH 00:22:06.748 10:09:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.748 10:09:05 -- nvmf/common.sh@46 -- # : 0 00:22:06.748 10:09:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:06.748 10:09:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:06.748 10:09:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:06.748 10:09:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.748 10:09:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.748 10:09:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:06.748 10:09:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:06.748 10:09:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:06.748 10:09:05 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:06.748 10:09:05 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:06.748 10:09:05 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:06.748 10:09:05 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:06.748 10:09:05 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:06.748 10:09:05 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:06.748 10:09:05 -- host/discovery.sh@25 -- # nvmftestinit 00:22:06.748 10:09:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:06.748 10:09:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.748 10:09:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:06.748 10:09:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:06.748 10:09:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:06.748 10:09:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.748 10:09:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.748 10:09:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.748 10:09:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:06.748 10:09:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:06.748 10:09:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.748 10:09:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.748 10:09:05 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:06.748 10:09:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:06.748 10:09:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:06.748 10:09:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:06.748 10:09:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:06.748 10:09:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.748 10:09:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:06.748 10:09:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:06.748 10:09:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:06.748 10:09:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:06.748 10:09:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:06.748 10:09:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:06.748 Cannot find device "nvmf_tgt_br" 00:22:06.748 10:09:05 -- nvmf/common.sh@154 -- # true 00:22:06.748 10:09:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:06.748 Cannot find device "nvmf_tgt_br2" 00:22:06.748 10:09:05 -- nvmf/common.sh@155 -- # true 00:22:06.748 10:09:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:06.748 10:09:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:06.748 Cannot find device "nvmf_tgt_br" 00:22:06.748 10:09:05 -- nvmf/common.sh@157 -- # true 00:22:06.748 10:09:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:06.748 Cannot find device "nvmf_tgt_br2" 00:22:06.748 10:09:05 -- nvmf/common.sh@158 -- # true 00:22:06.748 10:09:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:07.007 10:09:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:07.007 10:09:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:07.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.007 10:09:05 -- nvmf/common.sh@161 -- # true 00:22:07.007 10:09:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:07.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.007 10:09:05 -- nvmf/common.sh@162 -- # true 00:22:07.007 10:09:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:07.007 10:09:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:07.007 10:09:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:07.007 10:09:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:07.007 10:09:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:07.007 10:09:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:07.007 10:09:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:07.007 10:09:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:07.007 10:09:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:07.007 10:09:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:07.007 10:09:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:07.007 10:09:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:07.007 10:09:05 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:07.007 10:09:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:07.007 10:09:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:07.007 10:09:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:07.007 10:09:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:07.007 10:09:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:07.007 10:09:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:07.007 10:09:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:07.007 10:09:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:07.007 10:09:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:07.007 10:09:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:07.007 10:09:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:07.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:22:07.007 00:22:07.007 --- 10.0.0.2 ping statistics --- 00:22:07.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.007 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:07.007 10:09:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:07.007 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:07.007 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:22:07.007 00:22:07.007 --- 10.0.0.3 ping statistics --- 00:22:07.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.007 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:07.007 10:09:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:07.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:22:07.007 00:22:07.007 --- 10.0.0.1 ping statistics --- 00:22:07.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.007 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:22:07.007 10:09:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.007 10:09:05 -- nvmf/common.sh@421 -- # return 0 00:22:07.007 10:09:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:07.007 10:09:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.007 10:09:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:07.007 10:09:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:07.007 10:09:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.007 10:09:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:07.007 10:09:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:07.007 10:09:05 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:07.007 10:09:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:07.007 10:09:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.007 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:22:07.007 10:09:05 -- nvmf/common.sh@469 -- # nvmfpid=96238 00:22:07.007 10:09:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:07.007 10:09:05 -- nvmf/common.sh@470 -- # waitforlisten 96238 00:22:07.007 10:09:05 -- common/autotest_common.sh@829 -- # '[' -z 96238 ']' 00:22:07.007 10:09:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.007 10:09:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.007 10:09:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.007 10:09:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.007 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:22:07.266 [2024-12-16 10:09:05.641747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:07.266 [2024-12-16 10:09:05.641834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.266 [2024-12-16 10:09:05.779771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.266 [2024-12-16 10:09:05.852522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:07.266 [2024-12-16 10:09:05.852645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.266 [2024-12-16 10:09:05.852657] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.266 [2024-12-16 10:09:05.852665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:07.266 [2024-12-16 10:09:05.852689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.202 10:09:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.202 10:09:06 -- common/autotest_common.sh@862 -- # return 0 00:22:08.202 10:09:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:08.202 10:09:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 10:09:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.202 10:09:06 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.202 10:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 [2024-12-16 10:09:06.708972] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.202 10:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.202 10:09:06 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:08.202 10:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 [2024-12-16 10:09:06.721107] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:08.202 10:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.202 10:09:06 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:08.202 10:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 null0 00:22:08.202 10:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.202 10:09:06 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:08.202 10:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 null1 00:22:08.202 10:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.202 10:09:06 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:08.202 10:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.202 10:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.202 10:09:06 -- host/discovery.sh@45 -- # hostpid=96288 00:22:08.202 10:09:06 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:08.202 10:09:06 -- host/discovery.sh@46 -- # waitforlisten 96288 /tmp/host.sock 00:22:08.202 10:09:06 -- common/autotest_common.sh@829 -- # '[' -z 96288 ']' 00:22:08.202 10:09:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:08.202 10:09:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.202 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:08.202 10:09:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:08.202 10:09:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.202 10:09:06 -- common/autotest_common.sh@10 -- # set +x 00:22:08.203 [2024-12-16 10:09:06.809217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:08.203 [2024-12-16 10:09:06.809320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96288 ] 00:22:08.462 [2024-12-16 10:09:06.953224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.462 [2024-12-16 10:09:07.030642] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.462 [2024-12-16 10:09:07.030861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.399 10:09:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.400 10:09:07 -- common/autotest_common.sh@862 -- # return 0 00:22:09.400 10:09:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.400 10:09:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@72 -- # notify_id=0 00:22:09.400 10:09:07 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # sort 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # xargs 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:09.400 10:09:07 -- host/discovery.sh@79 -- # get_bdev_list 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # sort 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # xargs 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:09.400 10:09:07 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.400 10:09:07 -- host/discovery.sh@59 
-- # sort 00:22:09.400 10:09:07 -- host/discovery.sh@59 -- # xargs 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:07 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:09.400 10:09:07 -- host/discovery.sh@83 -- # get_bdev_list 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.400 10:09:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:07 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # sort 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # xargs 00:22:09.400 10:09:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.400 10:09:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:08 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:09.400 10:09:08 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:09.400 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.400 10:09:08 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:09.400 10:09:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.400 10:09:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.400 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.400 10:09:08 -- host/discovery.sh@59 -- # sort 00:22:09.400 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 10:09:08 -- host/discovery.sh@59 -- # xargs 00:22:09.659 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.659 10:09:08 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:09.659 10:09:08 -- host/discovery.sh@87 -- # get_bdev_list 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.659 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # xargs 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # sort 00:22:09.659 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.659 10:09:08 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:09.659 10:09:08 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.659 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.659 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 [2024-12-16 10:09:08.137452] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.659 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.659 10:09:08 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:09.659 10:09:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:09.659 10:09:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:09.659 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.659 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 10:09:08 -- host/discovery.sh@59 -- # sort 00:22:09.659 10:09:08 -- host/discovery.sh@59 -- # xargs 00:22:09.659 10:09:08 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.659 10:09:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:09.659 10:09:08 -- host/discovery.sh@93 -- # get_bdev_list 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # sort 00:22:09.659 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.659 10:09:08 -- host/discovery.sh@55 -- # xargs 00:22:09.659 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.659 10:09:08 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:09.659 10:09:08 -- host/discovery.sh@94 -- # get_notification_count 00:22:09.659 10:09:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:09.659 10:09:08 -- host/discovery.sh@74 -- # jq '. | length' 00:22:09.659 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.659 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.659 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.918 10:09:08 -- host/discovery.sh@74 -- # notification_count=0 00:22:09.918 10:09:08 -- host/discovery.sh@75 -- # notify_id=0 00:22:09.918 10:09:08 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:09.918 10:09:08 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:09.918 10:09:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.918 10:09:08 -- common/autotest_common.sh@10 -- # set +x 00:22:09.918 10:09:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.918 10:09:08 -- host/discovery.sh@100 -- # sleep 1 00:22:10.177 [2024-12-16 10:09:08.784195] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:10.177 [2024-12-16 10:09:08.784241] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:10.177 [2024-12-16 10:09:08.784259] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:10.436 [2024-12-16 10:09:08.870319] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:10.436 [2024-12-16 10:09:08.925943] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:10.436 [2024-12-16 10:09:08.925970] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:11.005 10:09:09 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:11.005 10:09:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.005 10:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.005 10:09:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.005 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.005 10:09:09 -- host/discovery.sh@59 -- # xargs 00:22:11.005 10:09:09 -- host/discovery.sh@59 -- # sort 00:22:11.005 10:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@102 -- # get_bdev_list 00:22:11.005 10:09:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.005 
10:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.005 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.005 10:09:09 -- host/discovery.sh@55 -- # sort 00:22:11.005 10:09:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.005 10:09:09 -- host/discovery.sh@55 -- # xargs 00:22:11.005 10:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:11.005 10:09:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:11.005 10:09:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:11.005 10:09:09 -- host/discovery.sh@63 -- # sort -n 00:22:11.005 10:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.005 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.005 10:09:09 -- host/discovery.sh@63 -- # xargs 00:22:11.005 10:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@104 -- # get_notification_count 00:22:11.005 10:09:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:11.005 10:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.005 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.005 10:09:09 -- host/discovery.sh@74 -- # jq '. | length' 00:22:11.005 10:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@74 -- # notification_count=1 00:22:11.005 10:09:09 -- host/discovery.sh@75 -- # notify_id=1 00:22:11.005 10:09:09 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:11.005 10:09:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.005 10:09:09 -- common/autotest_common.sh@10 -- # set +x 00:22:11.005 10:09:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.005 10:09:09 -- host/discovery.sh@109 -- # sleep 1 00:22:12.016 10:09:10 -- host/discovery.sh@110 -- # get_bdev_list 00:22:12.016 10:09:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.017 10:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.017 10:09:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.017 10:09:10 -- common/autotest_common.sh@10 -- # set +x 00:22:12.017 10:09:10 -- host/discovery.sh@55 -- # xargs 00:22:12.017 10:09:10 -- host/discovery.sh@55 -- # sort 00:22:12.017 10:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.017 10:09:10 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:12.017 10:09:10 -- host/discovery.sh@111 -- # get_notification_count 00:22:12.017 10:09:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:12.017 10:09:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:12.017 10:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.017 10:09:10 -- common/autotest_common.sh@10 -- # set +x 00:22:12.017 10:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.275 10:09:10 -- host/discovery.sh@74 -- # notification_count=1 00:22:12.275 10:09:10 -- host/discovery.sh@75 -- # notify_id=2 00:22:12.275 10:09:10 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:12.275 10:09:10 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:12.275 10:09:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.275 10:09:10 -- common/autotest_common.sh@10 -- # set +x 00:22:12.275 [2024-12-16 10:09:10.675577] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.275 [2024-12-16 10:09:10.676272] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:12.275 [2024-12-16 10:09:10.676299] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:12.275 10:09:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.275 10:09:10 -- host/discovery.sh@117 -- # sleep 1 00:22:12.275 [2024-12-16 10:09:10.762341] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:12.275 [2024-12-16 10:09:10.826642] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:12.275 [2024-12-16 10:09:10.826663] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:12.275 [2024-12-16 10:09:10.826669] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:13.212 10:09:11 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:13.212 10:09:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.212 10:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.212 10:09:11 -- host/discovery.sh@59 -- # sort 00:22:13.212 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.212 10:09:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:13.212 10:09:11 -- host/discovery.sh@59 -- # xargs 00:22:13.212 10:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.212 10:09:11 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.212 10:09:11 -- host/discovery.sh@119 -- # get_bdev_list 00:22:13.212 10:09:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.212 10:09:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.212 10:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.212 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.212 10:09:11 -- host/discovery.sh@55 -- # sort 00:22:13.212 10:09:11 -- host/discovery.sh@55 -- # xargs 00:22:13.212 10:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.212 10:09:11 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:13.212 10:09:11 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:13.212 10:09:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:13.212 10:09:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:13.212 10:09:11 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.212 10:09:11 -- host/discovery.sh@63 -- # sort -n 00:22:13.212 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.212 10:09:11 -- host/discovery.sh@63 -- # xargs 00:22:13.212 10:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.472 10:09:11 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:13.472 10:09:11 -- host/discovery.sh@121 -- # get_notification_count 00:22:13.472 10:09:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:13.472 10:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.472 10:09:11 -- host/discovery.sh@74 -- # jq '. | length' 00:22:13.472 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.472 10:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.472 10:09:11 -- host/discovery.sh@74 -- # notification_count=0 00:22:13.472 10:09:11 -- host/discovery.sh@75 -- # notify_id=2 00:22:13.472 10:09:11 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:13.472 10:09:11 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:13.472 10:09:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.472 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:22:13.472 [2024-12-16 10:09:11.912785] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:13.472 [2024-12-16 10:09:11.912812] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:13.472 [2024-12-16 10:09:11.913917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.472 [2024-12-16 10:09:11.913949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.472 [2024-12-16 10:09:11.913977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.472 [2024-12-16 10:09:11.913986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.472 [2024-12-16 10:09:11.913994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.472 [2024-12-16 10:09:11.914002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.472 [2024-12-16 10:09:11.914011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:13.472 [2024-12-16 10:09:11.914019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.472 [2024-12-16 10:09:11.914027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 10:09:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.472 10:09:11 -- host/discovery.sh@127 -- # sleep 1 00:22:13.472 [2024-12-16 10:09:11.923886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.933913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.934043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.934113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.934131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.472 [2024-12-16 10:09:11.934142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 [2024-12-16 10:09:11.934161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.934177] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.472 [2024-12-16 10:09:11.934186] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.472 [2024-12-16 10:09:11.934196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.472 [2024-12-16 10:09:11.934211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.472 [2024-12-16 10:09:11.943986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.944059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.944102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.944117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.472 [2024-12-16 10:09:11.944127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 [2024-12-16 10:09:11.944142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.944155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.472 [2024-12-16 10:09:11.944163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.472 [2024-12-16 10:09:11.944170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.472 [2024-12-16 10:09:11.944183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.472 [2024-12-16 10:09:11.954029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.954131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.954177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.954193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.472 [2024-12-16 10:09:11.954203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 [2024-12-16 10:09:11.954219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.954232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.472 [2024-12-16 10:09:11.954241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.472 [2024-12-16 10:09:11.954249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.472 [2024-12-16 10:09:11.954263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.472 [2024-12-16 10:09:11.964098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.964195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.964238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.964253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.472 [2024-12-16 10:09:11.964263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 [2024-12-16 10:09:11.964277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.964290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.472 [2024-12-16 10:09:11.964297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.472 [2024-12-16 10:09:11.964305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.472 [2024-12-16 10:09:11.964317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.472 [2024-12-16 10:09:11.974165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.974261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.974305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.974320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.472 [2024-12-16 10:09:11.974330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.472 [2024-12-16 10:09:11.974345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.472 [2024-12-16 10:09:11.974372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.472 [2024-12-16 10:09:11.974380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.472 [2024-12-16 10:09:11.974422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.472 [2024-12-16 10:09:11.974451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.472 [2024-12-16 10:09:11.984230] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.472 [2024-12-16 10:09:11.984550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.472 [2024-12-16 10:09:11.984600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.473 [2024-12-16 10:09:11.984617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.473 [2024-12-16 10:09:11.984629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.473 [2024-12-16 10:09:11.984646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.473 [2024-12-16 10:09:11.984661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.473 [2024-12-16 10:09:11.984670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.473 [2024-12-16 10:09:11.984680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.473 [2024-12-16 10:09:11.984695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:13.473 [2024-12-16 10:09:11.994509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.473 [2024-12-16 10:09:11.994620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.473 [2024-12-16 10:09:11.994663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.473 [2024-12-16 10:09:11.994678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217a570 with addr=10.0.0.2, port=4420 00:22:13.473 [2024-12-16 10:09:11.994688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a570 is same with the state(5) to be set 00:22:13.473 [2024-12-16 10:09:11.994702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217a570 (9): Bad file descriptor 00:22:13.473 [2024-12-16 10:09:11.994715] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:13.473 [2024-12-16 10:09:11.994722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:13.473 [2024-12-16 10:09:11.994730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:13.473 [2024-12-16 10:09:11.994742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:13.473 [2024-12-16 10:09:11.998891] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:13.473 [2024-12-16 10:09:11.998935] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:14.409 10:09:12 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:14.409 10:09:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.409 10:09:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.409 10:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.409 10:09:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.409 10:09:12 -- host/discovery.sh@59 -- # sort 00:22:14.409 10:09:12 -- host/discovery.sh@59 -- # xargs 00:22:14.409 10:09:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.409 10:09:12 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.409 10:09:12 -- host/discovery.sh@129 -- # get_bdev_list 00:22:14.409 10:09:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.409 10:09:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.409 10:09:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.409 10:09:12 -- host/discovery.sh@55 -- # sort 00:22:14.409 10:09:12 -- common/autotest_common.sh@10 -- # set +x 00:22:14.409 10:09:12 -- host/discovery.sh@55 -- # xargs 00:22:14.409 10:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:14.668 10:09:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:14.668 10:09:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:14.668 10:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.668 10:09:13 -- host/discovery.sh@63 -- # sort -n 00:22:14.668 10:09:13 -- 
host/discovery.sh@63 -- # xargs 00:22:14.668 10:09:13 -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 10:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@131 -- # get_notification_count 00:22:14.668 10:09:13 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:14.668 10:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.668 10:09:13 -- host/discovery.sh@74 -- # jq '. | length' 00:22:14.668 10:09:13 -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 10:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@74 -- # notification_count=0 00:22:14.668 10:09:13 -- host/discovery.sh@75 -- # notify_id=2 00:22:14.668 10:09:13 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:14.668 10:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.668 10:09:13 -- common/autotest_common.sh@10 -- # set +x 00:22:14.668 10:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.668 10:09:13 -- host/discovery.sh@135 -- # sleep 1 00:22:15.604 10:09:14 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:15.604 10:09:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.604 10:09:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.604 10:09:14 -- host/discovery.sh@59 -- # sort 00:22:15.604 10:09:14 -- host/discovery.sh@59 -- # xargs 00:22:15.604 10:09:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.604 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.604 10:09:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.604 10:09:14 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:15.604 10:09:14 -- host/discovery.sh@137 -- # get_bdev_list 00:22:15.604 10:09:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.604 10:09:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.604 10:09:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.604 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.604 10:09:14 -- host/discovery.sh@55 -- # sort 00:22:15.604 10:09:14 -- host/discovery.sh@55 -- # xargs 00:22:15.604 10:09:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.863 10:09:14 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:15.863 10:09:14 -- host/discovery.sh@138 -- # get_notification_count 00:22:15.863 10:09:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:15.863 10:09:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:15.863 10:09:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.863 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:15.863 10:09:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.863 10:09:14 -- host/discovery.sh@74 -- # notification_count=2 00:22:15.863 10:09:14 -- host/discovery.sh@75 -- # notify_id=4 00:22:15.863 10:09:14 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:15.863 10:09:14 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:15.863 10:09:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.863 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:22:16.799 [2024-12-16 10:09:15.345199] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:16.799 [2024-12-16 10:09:15.345225] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:16.799 [2024-12-16 10:09:15.345257] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:17.058 [2024-12-16 10:09:15.431281] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:17.058 [2024-12-16 10:09:15.490445] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:17.058 [2024-12-16 10:09:15.490501] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.058 10:09:15 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@650 -- # local es=0 00:22:17.058 10:09:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.058 10:09:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 2024/12/16 10:09:15 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:17.058 request: 00:22:17.058 { 00:22:17.058 "method": "bdev_nvme_start_discovery", 00:22:17.058 "params": { 00:22:17.058 "name": "nvme", 00:22:17.058 "trtype": "tcp", 00:22:17.058 "traddr": "10.0.0.2", 00:22:17.058 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:17.058 
"adrfam": "ipv4", 00:22:17.058 "trsvcid": "8009", 00:22:17.058 "wait_for_attach": true 00:22:17.058 } 00:22:17.058 } 00:22:17.058 Got JSON-RPC error response 00:22:17.058 GoRPCClient: error on JSON-RPC call 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:17.058 10:09:15 -- common/autotest_common.sh@653 -- # es=1 00:22:17.058 10:09:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.058 10:09:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.058 10:09:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.058 10:09:15 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:17.058 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # xargs 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # sort 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.058 10:09:15 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:17.058 10:09:15 -- host/discovery.sh@147 -- # get_bdev_list 00:22:17.058 10:09:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.058 10:09:15 -- host/discovery.sh@55 -- # sort 00:22:17.058 10:09:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.058 10:09:15 -- host/discovery.sh@55 -- # xargs 00:22:17.058 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.058 10:09:15 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.058 10:09:15 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@650 -- # local es=0 00:22:17.058 10:09:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.058 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.058 10:09:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:17.058 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 2024/12/16 10:09:15 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:17.058 request: 00:22:17.058 { 00:22:17.058 "method": "bdev_nvme_start_discovery", 00:22:17.058 "params": { 00:22:17.058 "name": "nvme_second", 00:22:17.058 "trtype": "tcp", 00:22:17.058 "traddr": "10.0.0.2", 
00:22:17.058 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:17.058 "adrfam": "ipv4", 00:22:17.058 "trsvcid": "8009", 00:22:17.058 "wait_for_attach": true 00:22:17.058 } 00:22:17.058 } 00:22:17.058 Got JSON-RPC error response 00:22:17.058 GoRPCClient: error on JSON-RPC call 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:17.058 10:09:15 -- common/autotest_common.sh@653 -- # es=1 00:22:17.058 10:09:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.058 10:09:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.058 10:09:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.058 10:09:15 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:17.058 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # xargs 00:22:17.058 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.058 10:09:15 -- host/discovery.sh@67 -- # sort 00:22:17.058 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.317 10:09:15 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:17.317 10:09:15 -- host/discovery.sh@153 -- # get_bdev_list 00:22:17.317 10:09:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.317 10:09:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.317 10:09:15 -- host/discovery.sh@55 -- # sort 00:22:17.317 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.317 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:17.317 10:09:15 -- host/discovery.sh@55 -- # xargs 00:22:17.318 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.318 10:09:15 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.318 10:09:15 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.318 10:09:15 -- common/autotest_common.sh@650 -- # local es=0 00:22:17.318 10:09:15 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.318 10:09:15 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:17.318 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.318 10:09:15 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:17.318 10:09:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:17.318 10:09:15 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:17.318 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.318 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:22:18.254 [2024-12-16 10:09:16.756574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.254 [2024-12-16 10:09:16.756682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.254 [2024-12-16 10:09:16.756700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2215f80 with addr=10.0.0.2, port=8010 00:22:18.254 [2024-12-16 10:09:16.756719] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:18.254 [2024-12-16 10:09:16.756727] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:18.254 [2024-12-16 10:09:16.756735] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:19.190 [2024-12-16 10:09:17.756547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.190 [2024-12-16 10:09:17.756647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.190 [2024-12-16 10:09:17.756664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eeca0 with addr=10.0.0.2, port=8010 00:22:19.190 [2024-12-16 10:09:17.756677] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:19.190 [2024-12-16 10:09:17.756686] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:19.190 [2024-12-16 10:09:17.756694] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:20.565 [2024-12-16 10:09:18.756478] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:20.565 2024/12/16 10:09:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:20.565 request: 00:22:20.565 { 00:22:20.565 "method": "bdev_nvme_start_discovery", 00:22:20.565 "params": { 00:22:20.565 "name": "nvme_second", 00:22:20.565 "trtype": "tcp", 00:22:20.565 "traddr": "10.0.0.2", 00:22:20.565 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:20.565 "adrfam": "ipv4", 00:22:20.565 "trsvcid": "8010", 00:22:20.565 "attach_timeout_ms": 3000 00:22:20.565 } 00:22:20.565 } 00:22:20.565 Got JSON-RPC error response 00:22:20.565 GoRPCClient: error on JSON-RPC call 00:22:20.565 10:09:18 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:20.565 10:09:18 -- common/autotest_common.sh@653 -- # es=1 00:22:20.565 10:09:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:20.565 10:09:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:20.565 10:09:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:20.565 10:09:18 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:20.565 10:09:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:20.565 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.565 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.565 10:09:18 -- host/discovery.sh@67 -- # sort 00:22:20.565 10:09:18 -- host/discovery.sh@67 -- # xargs 00:22:20.565 10:09:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:20.565 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.565 10:09:18 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:20.565 10:09:18 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:20.565 10:09:18 -- host/discovery.sh@162 -- # kill 96288 00:22:20.565 10:09:18 -- host/discovery.sh@163 -- # nvmftestfini 00:22:20.565 10:09:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:20.565 10:09:18 -- nvmf/common.sh@116 -- # sync 00:22:20.565 10:09:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:20.565 10:09:18 -- nvmf/common.sh@119 -- # set +e 00:22:20.565 10:09:18 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:20.565 10:09:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:20.565 rmmod nvme_tcp 00:22:20.565 rmmod nvme_fabrics 00:22:20.565 rmmod nvme_keyring 00:22:20.565 10:09:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:20.565 10:09:18 -- nvmf/common.sh@123 -- # set -e 00:22:20.565 10:09:18 -- nvmf/common.sh@124 -- # return 0 00:22:20.565 10:09:18 -- nvmf/common.sh@477 -- # '[' -n 96238 ']' 00:22:20.565 10:09:18 -- nvmf/common.sh@478 -- # killprocess 96238 00:22:20.566 10:09:18 -- common/autotest_common.sh@936 -- # '[' -z 96238 ']' 00:22:20.566 10:09:18 -- common/autotest_common.sh@940 -- # kill -0 96238 00:22:20.566 10:09:18 -- common/autotest_common.sh@941 -- # uname 00:22:20.566 10:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.566 10:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96238 00:22:20.566 killing process with pid 96238 00:22:20.566 10:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:20.566 10:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:20.566 10:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96238' 00:22:20.566 10:09:18 -- common/autotest_common.sh@955 -- # kill 96238 00:22:20.566 10:09:18 -- common/autotest_common.sh@960 -- # wait 96238 00:22:20.566 10:09:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:20.566 10:09:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:20.566 10:09:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:20.566 10:09:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.566 10:09:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:20.566 10:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.566 10:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.566 10:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.825 10:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:20.825 00:22:20.825 real 0m14.102s 00:22:20.825 user 0m27.666s 00:22:20.825 sys 0m1.701s 00:22:20.825 10:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:20.825 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:22:20.825 ************************************ 00:22:20.825 END TEST nvmf_discovery 00:22:20.825 ************************************ 00:22:20.825 10:09:19 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:20.825 10:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:20.825 10:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:20.825 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:22:20.825 ************************************ 00:22:20.825 START TEST nvmf_discovery_remove_ifc 00:22:20.825 ************************************ 00:22:20.825 10:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:20.825 * Looking for test storage... 
00:22:20.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:20.825 10:09:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:20.825 10:09:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:20.825 10:09:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:20.825 10:09:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:20.825 10:09:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:20.825 10:09:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:20.825 10:09:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:20.825 10:09:19 -- scripts/common.sh@335 -- # IFS=.-: 00:22:20.825 10:09:19 -- scripts/common.sh@335 -- # read -ra ver1 00:22:20.825 10:09:19 -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.825 10:09:19 -- scripts/common.sh@336 -- # read -ra ver2 00:22:20.825 10:09:19 -- scripts/common.sh@337 -- # local 'op=<' 00:22:20.825 10:09:19 -- scripts/common.sh@339 -- # ver1_l=2 00:22:20.825 10:09:19 -- scripts/common.sh@340 -- # ver2_l=1 00:22:20.825 10:09:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:20.825 10:09:19 -- scripts/common.sh@343 -- # case "$op" in 00:22:20.825 10:09:19 -- scripts/common.sh@344 -- # : 1 00:22:20.825 10:09:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:20.825 10:09:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.825 10:09:19 -- scripts/common.sh@364 -- # decimal 1 00:22:20.825 10:09:19 -- scripts/common.sh@352 -- # local d=1 00:22:20.825 10:09:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.825 10:09:19 -- scripts/common.sh@354 -- # echo 1 00:22:20.825 10:09:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:20.825 10:09:19 -- scripts/common.sh@365 -- # decimal 2 00:22:20.825 10:09:19 -- scripts/common.sh@352 -- # local d=2 00:22:20.825 10:09:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.825 10:09:19 -- scripts/common.sh@354 -- # echo 2 00:22:20.825 10:09:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:20.825 10:09:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:20.825 10:09:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:20.825 10:09:19 -- scripts/common.sh@367 -- # return 0 00:22:20.825 10:09:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.825 10:09:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:20.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.825 --rc genhtml_branch_coverage=1 00:22:20.825 --rc genhtml_function_coverage=1 00:22:20.825 --rc genhtml_legend=1 00:22:20.825 --rc geninfo_all_blocks=1 00:22:20.825 --rc geninfo_unexecuted_blocks=1 00:22:20.825 00:22:20.825 ' 00:22:20.825 10:09:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:20.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.825 --rc genhtml_branch_coverage=1 00:22:20.825 --rc genhtml_function_coverage=1 00:22:20.825 --rc genhtml_legend=1 00:22:20.825 --rc geninfo_all_blocks=1 00:22:20.825 --rc geninfo_unexecuted_blocks=1 00:22:20.825 00:22:20.825 ' 00:22:20.825 10:09:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:20.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.825 --rc genhtml_branch_coverage=1 00:22:20.825 --rc genhtml_function_coverage=1 00:22:20.825 --rc genhtml_legend=1 00:22:20.825 --rc geninfo_all_blocks=1 00:22:20.825 --rc geninfo_unexecuted_blocks=1 00:22:20.825 00:22:20.825 ' 00:22:20.825 
10:09:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:20.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.825 --rc genhtml_branch_coverage=1 00:22:20.825 --rc genhtml_function_coverage=1 00:22:20.825 --rc genhtml_legend=1 00:22:20.825 --rc geninfo_all_blocks=1 00:22:20.825 --rc geninfo_unexecuted_blocks=1 00:22:20.825 00:22:20.825 ' 00:22:20.825 10:09:19 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.825 10:09:19 -- nvmf/common.sh@7 -- # uname -s 00:22:20.825 10:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.825 10:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.825 10:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.825 10:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.825 10:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.825 10:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.825 10:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.825 10:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.825 10:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.825 10:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.084 10:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:21.084 10:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:21.084 10:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.084 10:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.084 10:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:21.084 10:09:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:21.084 10:09:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.084 10:09:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.084 10:09:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.084 10:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.084 10:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.085 10:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.085 10:09:19 -- paths/export.sh@5 -- # export PATH 00:22:21.085 10:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.085 10:09:19 -- nvmf/common.sh@46 -- # : 0 00:22:21.085 10:09:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:21.085 10:09:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:21.085 10:09:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:21.085 10:09:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.085 10:09:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.085 10:09:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:21.085 10:09:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:21.085 10:09:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:21.085 10:09:19 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:21.085 10:09:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:21.085 10:09:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.085 10:09:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:21.085 10:09:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:21.085 10:09:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:21.085 10:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.085 10:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.085 10:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.085 10:09:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:21.085 10:09:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:21.085 10:09:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:21.085 10:09:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:21.085 10:09:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:21.085 10:09:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:21.085 10:09:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.085 10:09:19 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.085 10:09:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:21.085 10:09:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:21.085 10:09:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:21.085 10:09:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:21.085 10:09:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:21.085 10:09:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.085 10:09:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:21.085 10:09:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:21.085 10:09:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:21.085 10:09:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:21.085 10:09:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:21.085 10:09:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:21.085 Cannot find device "nvmf_tgt_br" 00:22:21.085 10:09:19 -- nvmf/common.sh@154 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:21.085 Cannot find device "nvmf_tgt_br2" 00:22:21.085 10:09:19 -- nvmf/common.sh@155 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:21.085 10:09:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:21.085 Cannot find device "nvmf_tgt_br" 00:22:21.085 10:09:19 -- nvmf/common.sh@157 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:21.085 Cannot find device "nvmf_tgt_br2" 00:22:21.085 10:09:19 -- nvmf/common.sh@158 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:21.085 10:09:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:21.085 10:09:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:21.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:21.085 10:09:19 -- nvmf/common.sh@161 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:21.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:21.085 10:09:19 -- nvmf/common.sh@162 -- # true 00:22:21.085 10:09:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:21.085 10:09:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:21.085 10:09:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:21.085 10:09:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:21.085 10:09:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:21.085 10:09:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:21.085 10:09:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:21.085 10:09:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:21.085 10:09:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:21.085 10:09:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:21.085 10:09:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:21.085 10:09:19 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:21.085 10:09:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:21.085 10:09:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:21.085 10:09:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:21.085 10:09:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:21.085 10:09:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:21.085 10:09:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:21.085 10:09:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:21.344 10:09:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:21.344 10:09:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:21.344 10:09:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:21.344 10:09:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:21.344 10:09:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:21.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:22:21.344 00:22:21.344 --- 10.0.0.2 ping statistics --- 00:22:21.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.344 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:21.344 10:09:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:21.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:21.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:22:21.344 00:22:21.344 --- 10.0.0.3 ping statistics --- 00:22:21.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.344 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:21.344 10:09:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:21.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:21.344 00:22:21.344 --- 10.0.0.1 ping statistics --- 00:22:21.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.344 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:21.344 10:09:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.344 10:09:19 -- nvmf/common.sh@421 -- # return 0 00:22:21.344 10:09:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:21.344 10:09:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.344 10:09:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:21.344 10:09:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:21.344 10:09:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.344 10:09:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:21.344 10:09:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:21.344 10:09:19 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:21.344 10:09:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:21.344 10:09:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.344 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:22:21.344 10:09:19 -- nvmf/common.sh@469 -- # nvmfpid=96802 00:22:21.344 10:09:19 -- nvmf/common.sh@470 -- # waitforlisten 96802 00:22:21.344 10:09:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:21.344 10:09:19 -- common/autotest_common.sh@829 -- # '[' -z 96802 ']' 00:22:21.344 10:09:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.344 10:09:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.344 10:09:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.344 10:09:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.344 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:22:21.344 [2024-12-16 10:09:19.843918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:21.344 [2024-12-16 10:09:19.844020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.603 [2024-12-16 10:09:19.978298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.603 [2024-12-16 10:09:20.049579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:21.603 [2024-12-16 10:09:20.049741] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.603 [2024-12-16 10:09:20.049756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.603 [2024-12-16 10:09:20.049764] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
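The bring-up traced above gives the target its own network namespace (nvmf_tgt_ns_spdk) holding 10.0.0.2 and 10.0.0.3 on veth interfaces bridged back to the initiator's 10.0.0.1, and the three pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands visible in the trace (same interface names and addresses; the second target interface and error handling are left out), the topology can be recreated roughly like this:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator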
00:22:21.603 [2024-12-16 10:09:20.049786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.538 10:09:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.538 10:09:20 -- common/autotest_common.sh@862 -- # return 0 00:22:22.538 10:09:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:22.538 10:09:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.538 10:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:22.538 10:09:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.538 10:09:20 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:22.538 10:09:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.538 10:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:22.538 [2024-12-16 10:09:20.889570] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.538 [2024-12-16 10:09:20.897727] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:22.538 null0 00:22:22.538 [2024-12-16 10:09:20.929634] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.538 10:09:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.538 10:09:20 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96852 00:22:22.538 10:09:20 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:22.538 10:09:20 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96852 /tmp/host.sock 00:22:22.538 10:09:20 -- common/autotest_common.sh@829 -- # '[' -z 96852 ']' 00:22:22.538 10:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:22.538 10:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.538 10:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:22.538 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:22.538 10:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.538 10:09:20 -- common/autotest_common.sh@10 -- # set +x 00:22:22.539 [2024-12-16 10:09:21.008850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:22.539 [2024-12-16 10:09:21.008969] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96852 ] 00:22:22.539 [2024-12-16 10:09:21.148986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.798 [2024-12-16 10:09:21.213291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:22.798 [2024-12-16 10:09:21.213496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.798 10:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.798 10:09:21 -- common/autotest_common.sh@862 -- # return 0 00:22:22.798 10:09:21 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.798 10:09:21 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:22.798 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.798 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:22:22.798 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.798 10:09:21 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:22.798 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.798 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:22:22.798 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.798 10:09:21 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:22.798 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.798 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:22:24.171 [2024-12-16 10:09:22.372543] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:24.171 [2024-12-16 10:09:22.372591] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:24.171 [2024-12-16 10:09:22.372610] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:24.171 [2024-12-16 10:09:22.458632] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:24.171 [2024-12-16 10:09:22.514192] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:24.171 [2024-12-16 10:09:22.514264] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:24.171 [2024-12-16 10:09:22.514291] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:24.171 [2024-12-16 10:09:22.514308] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:24.171 [2024-12-16 10:09:22.514332] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:24.171 10:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.171 10:09:22 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.171 [2024-12-16 10:09:22.521197] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14a2da0 was disconnected and freed. delete nvme_qpair. 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.171 10:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.171 10:09:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.171 10:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.171 10:09:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.171 10:09:22 -- common/autotest_common.sh@10 -- # set +x 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.171 10:09:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:24.171 10:09:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.106 10:09:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.106 10:09:23 -- common/autotest_common.sh@10 -- # set +x 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.106 10:09:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.106 10:09:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.482 10:09:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.482 10:09:24 -- common/autotest_common.sh@10 -- # set +x 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.482 10:09:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:26.482 10:09:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:27.414 10:09:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.414 10:09:25 -- common/autotest_common.sh@10 -- # set +x 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.414 10:09:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:27.414 10:09:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.352 10:09:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.352 10:09:26 -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.352 10:09:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.352 10:09:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.319 10:09:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.319 10:09:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.319 10:09:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.319 10:09:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.319 10:09:27 -- common/autotest_common.sh@10 -- # set +x 00:22:29.319 10:09:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.319 10:09:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.319 10:09:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.590 [2024-12-16 10:09:27.942317] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:29.590 [2024-12-16 10:09:27.942447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.590 [2024-12-16 10:09:27.942465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.590 [2024-12-16 10:09:27.942478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.590 [2024-12-16 10:09:27.942488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.590 [2024-12-16 10:09:27.942513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.590 [2024-12-16 10:09:27.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.590 [2024-12-16 10:09:27.942531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.590 [2024-12-16 10:09:27.942540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.590 [2024-12-16 
10:09:27.942550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.590 [2024-12-16 10:09:27.942558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.590 [2024-12-16 10:09:27.942567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c690 is same with the state(5) to be set 00:22:29.590 [2024-12-16 10:09:27.952310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c690 (9): Bad file descriptor 00:22:29.590 [2024-12-16 10:09:27.962339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:29.590 10:09:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.590 10:09:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.526 10:09:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.526 10:09:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.526 10:09:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.526 10:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.526 10:09:28 -- common/autotest_common.sh@10 -- # set +x 00:22:30.526 10:09:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.526 10:09:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.526 [2024-12-16 10:09:29.000414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:31.462 [2024-12-16 10:09:30.019482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:31.462 [2024-12-16 10:09:30.019603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140c690 with addr=10.0.0.2, port=4420 00:22:31.462 [2024-12-16 10:09:30.019638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c690 is same with the state(5) to be set 00:22:31.462 [2024-12-16 10:09:30.019689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.462 [2024-12-16 10:09:30.019711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.462 [2024-12-16 10:09:30.019729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.462 [2024-12-16 10:09:30.019748] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:31.462 [2024-12-16 10:09:30.020821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c690 (9): Bad file descriptor 00:22:31.462 [2024-12-16 10:09:30.021139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
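In this pass the connect() failures report errno = 110 (ETIMEDOUT) rather than 111: the target interface has just been deleted inside the namespace, so nothing answers at 10.0.0.2 and the reconnect attempts time out. The host was started with short reconnect limits, so the controller is given up quickly; the discovery command traced earlier is restated below with those knobs spelled out (same flags as in the log, shown here with the standard rpc.py client).

  #   --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2s of failed reconnects
  #   --reconnect-delay-sec 1       wait 1s between reconnect attempts
  #   --fast-io-fail-timeout-sec 1  fail queued I/O quickly while the path is down
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach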
00:22:31.462 [2024-12-16 10:09:30.021209] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:31.462 [2024-12-16 10:09:30.021278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.462 [2024-12-16 10:09:30.021308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.462 [2024-12-16 10:09:30.021333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.462 [2024-12-16 10:09:30.021394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.462 [2024-12-16 10:09:30.021420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.462 [2024-12-16 10:09:30.021442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.462 [2024-12-16 10:09:30.021463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.462 [2024-12-16 10:09:30.021482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.462 [2024-12-16 10:09:30.021503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.462 [2024-12-16 10:09:30.021522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.462 [2024-12-16 10:09:30.021541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:31.462 [2024-12-16 10:09:30.021574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146a410 (9): Bad file descriptor 00:22:31.462 [2024-12-16 10:09:30.022165] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:31.462 [2024-12-16 10:09:30.022196] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:31.462 10:09:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.462 10:09:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.462 10:09:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.840 10:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.840 10:09:31 -- common/autotest_common.sh@10 -- # set +x 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.840 10:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.840 10:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.840 10:09:31 -- common/autotest_common.sh@10 -- # set +x 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.840 10:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:32.840 10:09:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.776 [2024-12-16 10:09:32.034215] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:33.776 [2024-12-16 10:09:32.034238] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:33.776 [2024-12-16 10:09:32.034255] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:33.776 [2024-12-16 10:09:32.120299] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:33.776 [2024-12-16 10:09:32.175378] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:33.776 [2024-12-16 10:09:32.175632] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:33.776 [2024-12-16 10:09:32.175667] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:33.776 [2024-12-16 10:09:32.175685] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:33.776 [2024-12-16 10:09:32.175693] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:33.776 10:09:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.776 [2024-12-16 10:09:32.182816] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14ab3e0 was disconnected and freed. delete nvme_qpair. 00:22:33.776 10:09:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.776 10:09:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.776 10:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.776 10:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:33.776 10:09:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.777 10:09:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.777 10:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.777 10:09:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:33.777 10:09:32 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:33.777 10:09:32 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96852 00:22:33.777 10:09:32 -- common/autotest_common.sh@936 -- # '[' -z 96852 ']' 00:22:33.777 10:09:32 -- common/autotest_common.sh@940 -- # kill -0 96852 00:22:33.777 10:09:32 -- common/autotest_common.sh@941 -- # uname 00:22:33.777 10:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:33.777 10:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96852 00:22:33.777 killing process with pid 96852 00:22:33.777 10:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:33.777 10:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:33.777 10:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96852' 00:22:33.777 10:09:32 -- common/autotest_common.sh@955 -- # kill 96852 00:22:33.777 10:09:32 -- common/autotest_common.sh@960 -- # wait 96852 00:22:34.036 10:09:32 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:34.036 10:09:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.036 10:09:32 -- nvmf/common.sh@116 -- # sync 00:22:34.036 10:09:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.036 10:09:32 -- nvmf/common.sh@119 -- # set +e 00:22:34.036 10:09:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.036 10:09:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.036 rmmod nvme_tcp 00:22:34.036 rmmod nvme_fabrics 00:22:34.036 rmmod nvme_keyring 00:22:34.036 10:09:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:34.036 10:09:32 -- nvmf/common.sh@123 -- # set -e 00:22:34.036 10:09:32 -- nvmf/common.sh@124 -- # return 0 00:22:34.036 10:09:32 -- nvmf/common.sh@477 -- # '[' -n 96802 ']' 00:22:34.036 10:09:32 -- nvmf/common.sh@478 -- # killprocess 96802 00:22:34.036 10:09:32 -- common/autotest_common.sh@936 -- # '[' -z 96802 ']' 00:22:34.036 10:09:32 -- common/autotest_common.sh@940 -- # kill -0 96802 00:22:34.036 10:09:32 -- common/autotest_common.sh@941 -- # uname 00:22:34.036 10:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.036 10:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96802 00:22:34.036 killing process with pid 96802 00:22:34.036 10:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:34.036 10:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
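For reference, the get_bdev_list / wait_for_bdev pattern traced above is just a poll of the host-side RPC socket until the bdev list matches the expected name. A rough sketch reconstructed from the xtrace output (not copied from host/discovery_remove_ifc.sh itself, so helper bodies are approximate):

    # Sketch only -- inferred from the trace above.
    get_bdev_list() {
        # rpc_cmd is the autotest wrapper around scripts/rpc.py; -s selects the host socket.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1   # '' while waiting for removal, nvme1n1 while waiting for re-attach
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }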
00:22:34.036 10:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96802' 00:22:34.036 10:09:32 -- common/autotest_common.sh@955 -- # kill 96802 00:22:34.036 10:09:32 -- common/autotest_common.sh@960 -- # wait 96802 00:22:34.295 10:09:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:34.295 10:09:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:34.295 10:09:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:34.295 10:09:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.295 10:09:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:34.295 10:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.295 10:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.295 10:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.295 10:09:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:34.295 00:22:34.295 real 0m13.595s 00:22:34.295 user 0m22.974s 00:22:34.295 sys 0m1.560s 00:22:34.295 10:09:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:34.295 ************************************ 00:22:34.295 END TEST nvmf_discovery_remove_ifc 00:22:34.295 10:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:34.295 ************************************ 00:22:34.295 10:09:32 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:34.295 10:09:32 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:34.295 10:09:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:34.295 10:09:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.295 10:09:32 -- common/autotest_common.sh@10 -- # set +x 00:22:34.295 ************************************ 00:22:34.295 START TEST nvmf_digest 00:22:34.295 ************************************ 00:22:34.295 10:09:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:34.554 * Looking for test storage... 00:22:34.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:34.554 10:09:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:34.554 10:09:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:34.554 10:09:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:34.554 10:09:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:34.554 10:09:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:34.554 10:09:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:34.554 10:09:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:34.554 10:09:33 -- scripts/common.sh@335 -- # IFS=.-: 00:22:34.554 10:09:33 -- scripts/common.sh@335 -- # read -ra ver1 00:22:34.554 10:09:33 -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.554 10:09:33 -- scripts/common.sh@336 -- # read -ra ver2 00:22:34.554 10:09:33 -- scripts/common.sh@337 -- # local 'op=<' 00:22:34.554 10:09:33 -- scripts/common.sh@339 -- # ver1_l=2 00:22:34.554 10:09:33 -- scripts/common.sh@340 -- # ver2_l=1 00:22:34.554 10:09:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:34.554 10:09:33 -- scripts/common.sh@343 -- # case "$op" in 00:22:34.554 10:09:33 -- scripts/common.sh@344 -- # : 1 00:22:34.554 10:09:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:34.554 10:09:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.554 10:09:33 -- scripts/common.sh@364 -- # decimal 1 00:22:34.554 10:09:33 -- scripts/common.sh@352 -- # local d=1 00:22:34.554 10:09:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.554 10:09:33 -- scripts/common.sh@354 -- # echo 1 00:22:34.554 10:09:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:34.554 10:09:33 -- scripts/common.sh@365 -- # decimal 2 00:22:34.554 10:09:33 -- scripts/common.sh@352 -- # local d=2 00:22:34.554 10:09:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.554 10:09:33 -- scripts/common.sh@354 -- # echo 2 00:22:34.554 10:09:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:34.554 10:09:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:34.554 10:09:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:34.554 10:09:33 -- scripts/common.sh@367 -- # return 0 00:22:34.554 10:09:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.554 10:09:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:34.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.554 --rc genhtml_branch_coverage=1 00:22:34.554 --rc genhtml_function_coverage=1 00:22:34.554 --rc genhtml_legend=1 00:22:34.554 --rc geninfo_all_blocks=1 00:22:34.555 --rc geninfo_unexecuted_blocks=1 00:22:34.555 00:22:34.555 ' 00:22:34.555 10:09:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.555 --rc genhtml_branch_coverage=1 00:22:34.555 --rc genhtml_function_coverage=1 00:22:34.555 --rc genhtml_legend=1 00:22:34.555 --rc geninfo_all_blocks=1 00:22:34.555 --rc geninfo_unexecuted_blocks=1 00:22:34.555 00:22:34.555 ' 00:22:34.555 10:09:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.555 --rc genhtml_branch_coverage=1 00:22:34.555 --rc genhtml_function_coverage=1 00:22:34.555 --rc genhtml_legend=1 00:22:34.555 --rc geninfo_all_blocks=1 00:22:34.555 --rc geninfo_unexecuted_blocks=1 00:22:34.555 00:22:34.555 ' 00:22:34.555 10:09:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:34.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.555 --rc genhtml_branch_coverage=1 00:22:34.555 --rc genhtml_function_coverage=1 00:22:34.555 --rc genhtml_legend=1 00:22:34.555 --rc geninfo_all_blocks=1 00:22:34.555 --rc geninfo_unexecuted_blocks=1 00:22:34.555 00:22:34.555 ' 00:22:34.555 10:09:33 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.555 10:09:33 -- nvmf/common.sh@7 -- # uname -s 00:22:34.555 10:09:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.555 10:09:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.555 10:09:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.555 10:09:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.555 10:09:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.555 10:09:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.555 10:09:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.555 10:09:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.555 10:09:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.555 10:09:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:34.555 
10:09:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:22:34.555 10:09:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.555 10:09:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.555 10:09:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.555 10:09:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.555 10:09:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.555 10:09:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.555 10:09:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.555 10:09:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.555 10:09:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.555 10:09:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.555 10:09:33 -- paths/export.sh@5 -- # export PATH 00:22:34.555 10:09:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.555 10:09:33 -- nvmf/common.sh@46 -- # : 0 00:22:34.555 10:09:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:34.555 10:09:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:34.555 10:09:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:34.555 10:09:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.555 10:09:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.555 10:09:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:34.555 10:09:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:34.555 10:09:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:34.555 10:09:33 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:34.555 10:09:33 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:34.555 10:09:33 -- host/digest.sh@16 -- # runtime=2 00:22:34.555 10:09:33 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:34.555 10:09:33 -- host/digest.sh@132 -- # nvmftestinit 00:22:34.555 10:09:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:34.555 10:09:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.555 10:09:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:34.555 10:09:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:34.555 10:09:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:34.555 10:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.555 10:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.555 10:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.555 10:09:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:34.555 10:09:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:34.555 10:09:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.555 10:09:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.555 10:09:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:34.555 10:09:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:34.555 10:09:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.555 10:09:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.555 10:09:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.555 10:09:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.555 10:09:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.555 10:09:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.555 10:09:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.555 10:09:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.555 10:09:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:34.555 10:09:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:34.555 Cannot find device "nvmf_tgt_br" 00:22:34.555 10:09:33 -- nvmf/common.sh@154 -- # true 00:22:34.555 10:09:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.555 Cannot find device "nvmf_tgt_br2" 00:22:34.555 10:09:33 -- nvmf/common.sh@155 -- # true 00:22:34.555 10:09:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:34.555 10:09:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:34.814 Cannot find device "nvmf_tgt_br" 00:22:34.814 10:09:33 -- nvmf/common.sh@157 -- # true 00:22:34.814 10:09:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:34.814 Cannot find device "nvmf_tgt_br2" 00:22:34.814 10:09:33 -- nvmf/common.sh@158 -- # true 00:22:34.814 10:09:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:34.814 10:09:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:34.814 
10:09:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.814 10:09:33 -- nvmf/common.sh@161 -- # true 00:22:34.814 10:09:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.814 10:09:33 -- nvmf/common.sh@162 -- # true 00:22:34.814 10:09:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:34.814 10:09:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:34.814 10:09:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:34.814 10:09:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:34.814 10:09:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:34.815 10:09:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:34.815 10:09:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:34.815 10:09:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:34.815 10:09:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:34.815 10:09:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:34.815 10:09:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:34.815 10:09:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:34.815 10:09:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:34.815 10:09:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:34.815 10:09:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:34.815 10:09:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:34.815 10:09:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:34.815 10:09:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:34.815 10:09:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:34.815 10:09:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:34.815 10:09:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:34.815 10:09:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:34.815 10:09:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:34.815 10:09:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:34.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:34.815 00:22:34.815 --- 10.0.0.2 ping statistics --- 00:22:34.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.815 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:34.815 10:09:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:34.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:34.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:22:34.815 00:22:34.815 --- 10.0.0.3 ping statistics --- 00:22:34.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.815 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:34.815 10:09:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:34.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:22:34.815 00:22:34.815 --- 10.0.0.1 ping statistics --- 00:22:34.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.815 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:34.815 10:09:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.815 10:09:33 -- nvmf/common.sh@421 -- # return 0 00:22:34.815 10:09:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:34.815 10:09:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.815 10:09:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:34.815 10:09:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:34.815 10:09:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.815 10:09:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:34.815 10:09:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:35.074 10:09:33 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:35.074 10:09:33 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:35.074 10:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:35.074 10:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.074 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.074 ************************************ 00:22:35.074 START TEST nvmf_digest_clean 00:22:35.074 ************************************ 00:22:35.074 10:09:33 -- common/autotest_common.sh@1114 -- # run_digest 00:22:35.074 10:09:33 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:35.074 10:09:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:35.074 10:09:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.074 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.074 10:09:33 -- nvmf/common.sh@469 -- # nvmfpid=97263 00:22:35.074 10:09:33 -- nvmf/common.sh@470 -- # waitforlisten 97263 00:22:35.074 10:09:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:35.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.074 10:09:33 -- common/autotest_common.sh@829 -- # '[' -z 97263 ']' 00:22:35.074 10:09:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.074 10:09:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.074 10:09:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.074 10:09:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.074 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.074 [2024-12-16 10:09:33.510019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
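Before the digest tests run, nvmftestinit rebuilds the virtual test network whose setup is traced above: a network namespace for the target, veth pairs bridged back to the initiator, fixed 10.0.0.x addresses, an iptables accept rule for port 4420, and ping checks in both directions. Condensed to the commands that appear in the trace (the leftover-cleanup pass and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1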
00:22:35.074 [2024-12-16 10:09:33.510133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.074 [2024-12-16 10:09:33.648301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.333 [2024-12-16 10:09:33.708554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.333 [2024-12-16 10:09:33.708723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.333 [2024-12-16 10:09:33.708737] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.333 [2024-12-16 10:09:33.708745] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.333 [2024-12-16 10:09:33.708783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.333 10:09:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.333 10:09:33 -- common/autotest_common.sh@862 -- # return 0 00:22:35.333 10:09:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:35.333 10:09:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.333 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.333 10:09:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.333 10:09:33 -- host/digest.sh@120 -- # common_target_config 00:22:35.333 10:09:33 -- host/digest.sh@43 -- # rpc_cmd 00:22:35.333 10:09:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.333 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.333 null0 00:22:35.333 [2024-12-16 10:09:33.911328] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.333 [2024-12-16 10:09:33.935425] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.333 10:09:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.333 10:09:33 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:35.333 10:09:33 -- host/digest.sh@77 -- # local rw bs qd 00:22:35.333 10:09:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:35.333 10:09:33 -- host/digest.sh@80 -- # rw=randread 00:22:35.333 10:09:33 -- host/digest.sh@80 -- # bs=4096 00:22:35.333 10:09:33 -- host/digest.sh@80 -- # qd=128 00:22:35.333 10:09:33 -- host/digest.sh@82 -- # bperfpid=97295 00:22:35.333 10:09:33 -- host/digest.sh@83 -- # waitforlisten 97295 /var/tmp/bperf.sock 00:22:35.333 10:09:33 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:35.333 10:09:33 -- common/autotest_common.sh@829 -- # '[' -z 97295 ']' 00:22:35.333 10:09:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:35.333 10:09:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.333 10:09:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:35.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
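At this point the nvmf target is running inside the namespace and common_target_config has created the null0 bdev and the TCP listener on 10.0.0.2:4420; the trace only shows the resulting notices, not the rpc_cmd payload itself. An assumed, hypothetical equivalent using standard SPDK RPC names (the null-bdev size, serial and flags below are placeholders, not taken from the log):

    # Hypothetical reconstruction -- the real digest.sh sends this as one rpc_cmd batch.
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd nvmf_create_transport -t tcp -o          # assumed to consume NVMF_TRANSPORT_OPTS='-t tcp -o' above
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420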
00:22:35.333 10:09:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.333 10:09:33 -- common/autotest_common.sh@10 -- # set +x 00:22:35.592 [2024-12-16 10:09:34.001162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:35.592 [2024-12-16 10:09:34.001476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97295 ] 00:22:35.592 [2024-12-16 10:09:34.143462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.899 [2024-12-16 10:09:34.220175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.467 10:09:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.467 10:09:35 -- common/autotest_common.sh@862 -- # return 0 00:22:36.467 10:09:35 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:36.467 10:09:35 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:36.467 10:09:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:37.032 10:09:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.032 10:09:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.291 nvme0n1 00:22:37.291 10:09:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:37.291 10:09:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.291 Running I/O for 2 seconds... 
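Each run_bperf pass in nvmf_digest_clean has the same shape, visible in the trace above: launch bdevperf paused with --wait-for-rpc on its own RPC socket, finish framework init, attach an NVMe-oF/TCP controller with data digest enabled (--ddgst), then drive the workload from bdevperf.py. Roughly, with flags copied from the randread/4096/qd128 case above:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later passes below only change the workload parameters (-w/-o/-q): randread 131072/16, randwrite 4096/128, randwrite 131072/16.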
00:22:39.196 00:22:39.196 Latency(us) 00:22:39.196 [2024-12-16T10:09:37.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.196 [2024-12-16T10:09:37.821Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:39.196 nvme0n1 : 2.01 21050.64 82.23 0.00 0.00 6075.38 2666.12 21686.46 00:22:39.196 [2024-12-16T10:09:37.821Z] =================================================================================================================== 00:22:39.196 [2024-12-16T10:09:37.821Z] Total : 21050.64 82.23 0.00 0.00 6075.38 2666.12 21686.46 00:22:39.196 0 00:22:39.196 10:09:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:39.196 10:09:37 -- host/digest.sh@92 -- # get_accel_stats 00:22:39.196 10:09:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:39.196 10:09:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:39.196 10:09:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:39.196 | select(.opcode=="crc32c") 00:22:39.196 | "\(.module_name) \(.executed)"' 00:22:39.763 10:09:38 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:39.763 10:09:38 -- host/digest.sh@93 -- # exp_module=software 00:22:39.763 10:09:38 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:39.763 10:09:38 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:39.763 10:09:38 -- host/digest.sh@97 -- # killprocess 97295 00:22:39.763 10:09:38 -- common/autotest_common.sh@936 -- # '[' -z 97295 ']' 00:22:39.763 10:09:38 -- common/autotest_common.sh@940 -- # kill -0 97295 00:22:39.763 10:09:38 -- common/autotest_common.sh@941 -- # uname 00:22:39.763 10:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.763 10:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97295 00:22:39.763 10:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.763 10:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:39.763 killing process with pid 97295 00:22:39.763 10:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97295' 00:22:39.763 10:09:38 -- common/autotest_common.sh@955 -- # kill 97295 00:22:39.763 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.763 00:22:39.763 Latency(us) 00:22:39.763 [2024-12-16T10:09:38.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.763 [2024-12-16T10:09:38.388Z] =================================================================================================================== 00:22:39.763 [2024-12-16T10:09:38.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.763 10:09:38 -- common/autotest_common.sh@960 -- # wait 97295 00:22:39.763 10:09:38 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:39.763 10:09:38 -- host/digest.sh@77 -- # local rw bs qd 00:22:39.763 10:09:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:39.763 10:09:38 -- host/digest.sh@80 -- # rw=randread 00:22:39.763 10:09:38 -- host/digest.sh@80 -- # bs=131072 00:22:39.763 10:09:38 -- host/digest.sh@80 -- # qd=16 00:22:39.763 10:09:38 -- host/digest.sh@82 -- # bperfpid=97384 00:22:39.763 10:09:38 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:39.763 10:09:38 -- host/digest.sh@83 -- # waitforlisten 97384 /var/tmp/bperf.sock 00:22:39.763 10:09:38 -- 
common/autotest_common.sh@829 -- # '[' -z 97384 ']' 00:22:39.763 10:09:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.763 10:09:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.763 10:09:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.763 10:09:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.763 10:09:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.764 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:39.764 Zero copy mechanism will not be used. 00:22:39.764 [2024-12-16 10:09:38.378175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:39.764 [2024-12-16 10:09:38.378261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97384 ] 00:22:40.022 [2024-12-16 10:09:38.512944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.022 [2024-12-16 10:09:38.577044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.022 10:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.022 10:09:38 -- common/autotest_common.sh@862 -- # return 0 00:22:40.022 10:09:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:40.022 10:09:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:40.022 10:09:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:40.590 10:09:39 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.590 10:09:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.849 nvme0n1 00:22:40.849 10:09:39 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:40.849 10:09:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.849 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:40.849 Zero copy mechanism will not be used. 00:22:40.849 Running I/O for 2 seconds... 
00:22:43.382 00:22:43.382 Latency(us) 00:22:43.382 [2024-12-16T10:09:42.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.382 [2024-12-16T10:09:42.007Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:43.382 nvme0n1 : 2.04 9060.31 1132.54 0.00 0.00 1730.17 629.29 42419.67 00:22:43.382 [2024-12-16T10:09:42.007Z] =================================================================================================================== 00:22:43.382 [2024-12-16T10:09:42.007Z] Total : 9060.31 1132.54 0.00 0.00 1730.17 629.29 42419.67 00:22:43.382 0 00:22:43.382 10:09:41 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:43.382 10:09:41 -- host/digest.sh@92 -- # get_accel_stats 00:22:43.382 10:09:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:43.382 10:09:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:43.382 | select(.opcode=="crc32c") 00:22:43.382 | "\(.module_name) \(.executed)"' 00:22:43.382 10:09:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:43.382 10:09:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:43.382 10:09:41 -- host/digest.sh@93 -- # exp_module=software 00:22:43.382 10:09:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:43.382 10:09:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.382 10:09:41 -- host/digest.sh@97 -- # killprocess 97384 00:22:43.382 10:09:41 -- common/autotest_common.sh@936 -- # '[' -z 97384 ']' 00:22:43.382 10:09:41 -- common/autotest_common.sh@940 -- # kill -0 97384 00:22:43.382 10:09:41 -- common/autotest_common.sh@941 -- # uname 00:22:43.382 10:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:43.382 10:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97384 00:22:43.382 10:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:43.382 10:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:43.382 10:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97384' 00:22:43.382 killing process with pid 97384 00:22:43.382 10:09:41 -- common/autotest_common.sh@955 -- # kill 97384 00:22:43.382 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.382 00:22:43.382 Latency(us) 00:22:43.382 [2024-12-16T10:09:42.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.382 [2024-12-16T10:09:42.007Z] =================================================================================================================== 00:22:43.382 [2024-12-16T10:09:42.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.382 10:09:41 -- common/autotest_common.sh@960 -- # wait 97384 00:22:43.640 10:09:42 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:43.640 10:09:42 -- host/digest.sh@77 -- # local rw bs qd 00:22:43.640 10:09:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:43.641 10:09:42 -- host/digest.sh@80 -- # rw=randwrite 00:22:43.641 10:09:42 -- host/digest.sh@80 -- # bs=4096 00:22:43.641 10:09:42 -- host/digest.sh@80 -- # qd=128 00:22:43.641 10:09:42 -- host/digest.sh@82 -- # bperfpid=97462 00:22:43.641 10:09:42 -- host/digest.sh@83 -- # waitforlisten 97462 /var/tmp/bperf.sock 00:22:43.641 10:09:42 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:43.641 10:09:42 -- 
common/autotest_common.sh@829 -- # '[' -z 97462 ']' 00:22:43.641 10:09:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:43.641 10:09:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:43.641 10:09:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:43.641 10:09:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.641 10:09:42 -- common/autotest_common.sh@10 -- # set +x 00:22:43.641 [2024-12-16 10:09:42.076077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:43.641 [2024-12-16 10:09:42.076209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97462 ] 00:22:43.641 [2024-12-16 10:09:42.215118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.898 [2024-12-16 10:09:42.288277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.465 10:09:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.465 10:09:43 -- common/autotest_common.sh@862 -- # return 0 00:22:44.465 10:09:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:44.465 10:09:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:44.465 10:09:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:45.033 10:09:43 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.033 10:09:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.291 nvme0n1 00:22:45.291 10:09:43 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:45.291 10:09:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:45.291 Running I/O for 2 seconds... 
00:22:47.192 00:22:47.192 Latency(us) 00:22:47.192 [2024-12-16T10:09:45.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.192 [2024-12-16T10:09:45.817Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:47.192 nvme0n1 : 2.00 26617.31 103.97 0.00 0.00 4804.13 1921.40 12690.15 00:22:47.192 [2024-12-16T10:09:45.817Z] =================================================================================================================== 00:22:47.192 [2024-12-16T10:09:45.817Z] Total : 26617.31 103.97 0.00 0.00 4804.13 1921.40 12690.15 00:22:47.192 0 00:22:47.192 10:09:45 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:47.450 10:09:45 -- host/digest.sh@92 -- # get_accel_stats 00:22:47.450 10:09:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:47.450 10:09:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:47.450 10:09:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:47.450 | select(.opcode=="crc32c") 00:22:47.450 | "\(.module_name) \(.executed)"' 00:22:47.709 10:09:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:47.709 10:09:46 -- host/digest.sh@93 -- # exp_module=software 00:22:47.709 10:09:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:47.709 10:09:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:47.709 10:09:46 -- host/digest.sh@97 -- # killprocess 97462 00:22:47.709 10:09:46 -- common/autotest_common.sh@936 -- # '[' -z 97462 ']' 00:22:47.709 10:09:46 -- common/autotest_common.sh@940 -- # kill -0 97462 00:22:47.709 10:09:46 -- common/autotest_common.sh@941 -- # uname 00:22:47.709 10:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.709 10:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97462 00:22:47.709 10:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.709 10:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.709 killing process with pid 97462 00:22:47.709 10:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97462' 00:22:47.709 Received shutdown signal, test time was about 2.000000 seconds 00:22:47.709 00:22:47.709 Latency(us) 00:22:47.709 [2024-12-16T10:09:46.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.709 [2024-12-16T10:09:46.334Z] =================================================================================================================== 00:22:47.709 [2024-12-16T10:09:46.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.709 10:09:46 -- common/autotest_common.sh@955 -- # kill 97462 00:22:47.709 10:09:46 -- common/autotest_common.sh@960 -- # wait 97462 00:22:47.709 10:09:46 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:47.709 10:09:46 -- host/digest.sh@77 -- # local rw bs qd 00:22:47.709 10:09:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:47.709 10:09:46 -- host/digest.sh@80 -- # rw=randwrite 00:22:47.709 10:09:46 -- host/digest.sh@80 -- # bs=131072 00:22:47.709 10:09:46 -- host/digest.sh@80 -- # qd=16 00:22:47.967 10:09:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:47.967 10:09:46 -- host/digest.sh@82 -- # bperfpid=97548 00:22:47.967 10:09:46 -- host/digest.sh@83 -- # waitforlisten 97548 /var/tmp/bperf.sock 00:22:47.967 10:09:46 -- 
common/autotest_common.sh@829 -- # '[' -z 97548 ']' 00:22:47.967 10:09:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:47.967 10:09:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:47.967 10:09:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:47.967 10:09:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.967 10:09:46 -- common/autotest_common.sh@10 -- # set +x 00:22:47.967 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:47.967 Zero copy mechanism will not be used. 00:22:47.967 [2024-12-16 10:09:46.370964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:47.967 [2024-12-16 10:09:46.371076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97548 ] 00:22:47.967 [2024-12-16 10:09:46.500430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.967 [2024-12-16 10:09:46.572143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.225 10:09:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.225 10:09:46 -- common/autotest_common.sh@862 -- # return 0 00:22:48.225 10:09:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:48.225 10:09:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:48.225 10:09:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:48.483 10:09:46 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:48.483 10:09:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:48.742 nvme0n1 00:22:48.742 10:09:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:48.742 10:09:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:48.742 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:48.742 Zero copy mechanism will not be used. 00:22:48.742 Running I/O for 2 seconds... 
00:22:51.294 00:22:51.294 Latency(us) 00:22:51.294 [2024-12-16T10:09:49.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.294 [2024-12-16T10:09:49.919Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:51.294 nvme0n1 : 2.00 8354.31 1044.29 0.00 0.00 1910.68 1511.80 11975.21 00:22:51.294 [2024-12-16T10:09:49.919Z] =================================================================================================================== 00:22:51.294 [2024-12-16T10:09:49.919Z] Total : 8354.31 1044.29 0.00 0.00 1910.68 1511.80 11975.21 00:22:51.294 0 00:22:51.294 10:09:49 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:51.294 10:09:49 -- host/digest.sh@92 -- # get_accel_stats 00:22:51.294 10:09:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:51.294 10:09:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:51.294 10:09:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:51.294 | select(.opcode=="crc32c") 00:22:51.294 | "\(.module_name) \(.executed)"' 00:22:51.294 10:09:49 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:51.294 10:09:49 -- host/digest.sh@93 -- # exp_module=software 00:22:51.294 10:09:49 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:51.294 10:09:49 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:51.294 10:09:49 -- host/digest.sh@97 -- # killprocess 97548 00:22:51.295 10:09:49 -- common/autotest_common.sh@936 -- # '[' -z 97548 ']' 00:22:51.295 10:09:49 -- common/autotest_common.sh@940 -- # kill -0 97548 00:22:51.295 10:09:49 -- common/autotest_common.sh@941 -- # uname 00:22:51.295 10:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.295 10:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97548 00:22:51.295 10:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:51.295 10:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:51.295 killing process with pid 97548 00:22:51.295 10:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97548' 00:22:51.295 Received shutdown signal, test time was about 2.000000 seconds 00:22:51.295 00:22:51.295 Latency(us) 00:22:51.295 [2024-12-16T10:09:49.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.295 [2024-12-16T10:09:49.920Z] =================================================================================================================== 00:22:51.295 [2024-12-16T10:09:49.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.295 10:09:49 -- common/autotest_common.sh@955 -- # kill 97548 00:22:51.295 10:09:49 -- common/autotest_common.sh@960 -- # wait 97548 00:22:51.295 10:09:49 -- host/digest.sh@126 -- # killprocess 97263 00:22:51.295 10:09:49 -- common/autotest_common.sh@936 -- # '[' -z 97263 ']' 00:22:51.295 10:09:49 -- common/autotest_common.sh@940 -- # kill -0 97263 00:22:51.295 10:09:49 -- common/autotest_common.sh@941 -- # uname 00:22:51.295 10:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.295 10:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97263 00:22:51.295 10:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.295 10:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.295 killing process with pid 97263 00:22:51.295 10:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97263' 
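After each workload the test reads the accel statistics back from the bperf socket and checks both that crc32c digests were actually executed and that they ran in the expected module (software here, since no accel hardware is configured). The check traced repeatedly above is approximately:

    # Sketch of the digest verification step, reconstructed from the trace.
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))              # digests were computed
    [[ $acc_module == software ]]       # and by the expected module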
00:22:51.295 10:09:49 -- common/autotest_common.sh@955 -- # kill 97263 00:22:51.295 10:09:49 -- common/autotest_common.sh@960 -- # wait 97263 00:22:51.554 00:22:51.554 real 0m16.564s 00:22:51.554 user 0m31.552s 00:22:51.554 sys 0m4.601s 00:22:51.554 10:09:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:51.554 10:09:50 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 ************************************ 00:22:51.554 END TEST nvmf_digest_clean 00:22:51.554 ************************************ 00:22:51.554 10:09:50 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:51.554 10:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:51.554 10:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.554 10:09:50 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 ************************************ 00:22:51.554 START TEST nvmf_digest_error 00:22:51.554 ************************************ 00:22:51.554 10:09:50 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:51.554 10:09:50 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:51.554 10:09:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:51.554 10:09:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.554 10:09:50 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 10:09:50 -- nvmf/common.sh@469 -- # nvmfpid=97648 00:22:51.554 10:09:50 -- nvmf/common.sh@470 -- # waitforlisten 97648 00:22:51.554 10:09:50 -- common/autotest_common.sh@829 -- # '[' -z 97648 ']' 00:22:51.554 10:09:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:51.554 10:09:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.554 10:09:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.554 10:09:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.554 10:09:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.554 10:09:50 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 [2024-12-16 10:09:50.135694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:51.554 [2024-12-16 10:09:50.135816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.812 [2024-12-16 10:09:50.276637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.812 [2024-12-16 10:09:50.356629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:51.812 [2024-12-16 10:09:50.356798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.812 [2024-12-16 10:09:50.356810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.812 [2024-12-16 10:09:50.356818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.812 [2024-12-16 10:09:50.356872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.747 10:09:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.747 10:09:51 -- common/autotest_common.sh@862 -- # return 0 00:22:52.747 10:09:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:52.747 10:09:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.747 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 10:09:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.748 10:09:51 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:52.748 10:09:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.748 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.748 [2024-12-16 10:09:51.113305] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:52.748 10:09:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.748 10:09:51 -- host/digest.sh@104 -- # common_target_config 00:22:52.748 10:09:51 -- host/digest.sh@43 -- # rpc_cmd 00:22:52.748 10:09:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.748 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.748 null0 00:22:52.748 [2024-12-16 10:09:51.219932] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.748 [2024-12-16 10:09:51.244049] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.748 10:09:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.748 10:09:51 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:52.748 10:09:51 -- host/digest.sh@54 -- # local rw bs qd 00:22:52.748 10:09:51 -- host/digest.sh@56 -- # rw=randread 00:22:52.748 10:09:51 -- host/digest.sh@56 -- # bs=4096 00:22:52.748 10:09:51 -- host/digest.sh@56 -- # qd=128 00:22:52.748 10:09:51 -- host/digest.sh@58 -- # bperfpid=97692 00:22:52.748 10:09:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:52.748 10:09:51 -- host/digest.sh@60 -- # waitforlisten 97692 /var/tmp/bperf.sock 00:22:52.748 10:09:51 -- common/autotest_common.sh@829 -- # '[' -z 97692 ']' 00:22:52.748 10:09:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:52.748 10:09:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:52.748 10:09:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:52.748 10:09:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.748 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.748 [2024-12-16 10:09:51.293304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:52.748 [2024-12-16 10:09:51.293417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97692 ] 00:22:53.006 [2024-12-16 10:09:51.423585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.006 [2024-12-16 10:09:51.506320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.942 10:09:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.942 10:09:52 -- common/autotest_common.sh@862 -- # return 0 00:22:53.942 10:09:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:53.942 10:09:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:53.942 10:09:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:53.942 10:09:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.942 10:09:52 -- common/autotest_common.sh@10 -- # set +x 00:22:53.942 10:09:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.942 10:09:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.942 10:09:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.200 nvme0n1 00:22:54.200 10:09:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:54.200 10:09:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.200 10:09:52 -- common/autotest_common.sh@10 -- # set +x 00:22:54.200 10:09:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.200 10:09:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:54.200 10:09:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.459 Running I/O for 2 seconds... 
00:22:54.459 [2024-12-16 10:09:52.926415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.926480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.926511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:52.940106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.940162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.940191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:52.953296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.953379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.953394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:52.966583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.966638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.966666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:52.979231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.979286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.979314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:52.989150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:52.989222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:52.989251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.001069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.001124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.001152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.013557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.013611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.026012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.026090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.026104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.040137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.040194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.040222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.052267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.052324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.052352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.063108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.063163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.063190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.459 [2024-12-16 10:09:53.072783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.459 [2024-12-16 10:09:53.072838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.459 [2024-12-16 10:09:53.072866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.083273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.083328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.083355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.093067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.093120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.093149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.102982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.103036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.103063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.115152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.115206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.115234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.124834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.124890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.124918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.138440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.138494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.138522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.150969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.151023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.151051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.164528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.164582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 
[2024-12-16 10:09:53.164610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.178310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.178380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.178409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.191098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.191152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.191181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.205107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.205167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.205197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.218253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.218310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.218322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.231189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.231243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.231271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.244018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.244073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.244101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.256768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.256823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6851 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.256852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.269232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.269288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.269316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.281927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.281982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.282010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.293934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.293990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.294018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.303886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.303957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.719 [2024-12-16 10:09:53.303985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.719 [2024-12-16 10:09:53.313994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.719 [2024-12-16 10:09:53.314092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.720 [2024-12-16 10:09:53.314105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.720 [2024-12-16 10:09:53.324951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.720 [2024-12-16 10:09:53.325007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.720 [2024-12-16 10:09:53.325036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.720 [2024-12-16 10:09:53.334704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.720 [2024-12-16 10:09:53.334773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:13320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.720 [2024-12-16 10:09:53.334801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.978 [2024-12-16 10:09:53.348399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.978 [2024-12-16 10:09:53.348453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.978 [2024-12-16 10:09:53.348481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.978 [2024-12-16 10:09:53.361243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.978 [2024-12-16 10:09:53.361298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.978 [2024-12-16 10:09:53.361326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.374817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.374873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.374902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.387910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.387966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.387994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.400787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.400841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.400869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.413222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.413278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.426669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.426725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.426752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.435398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.435453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.435481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.448778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.448833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.448861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.461157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.461212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.461239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.474291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.474347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.474373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.486978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.487034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.487062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.499825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.499909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.513590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 
[2024-12-16 10:09:53.513649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.513661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.526106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.526163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.538854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.538910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.538938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.551805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.551861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.551890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.564497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.564553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.564581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.577592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.577647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.577675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:54.979 [2024-12-16 10:09:53.590123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:54.979 [2024-12-16 10:09:53.590180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.979 [2024-12-16 10:09:53.590193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.602873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.602928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.602955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.615527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.615582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.615611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.628162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.628216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.628244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.640905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.640961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.640989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.653507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.653561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.653589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.666444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.666500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.666528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.679272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.679328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.691938] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.691992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.692020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.704750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.704806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.704834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.717523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.717577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.717605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.730463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.730518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.730545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.745293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.745348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.745390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.757108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.757162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.757190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.766265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.766321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.766333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:55.239 [2024-12-16 10:09:53.778115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.778171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.778183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.791880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.791937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.791965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.806405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.806447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.806477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.822597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.239 [2024-12-16 10:09:53.822658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.239 [2024-12-16 10:09:53.822703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.239 [2024-12-16 10:09:53.834861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.240 [2024-12-16 10:09:53.834917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.240 [2024-12-16 10:09:53.834945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.240 [2024-12-16 10:09:53.848434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.240 [2024-12-16 10:09:53.848490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.240 [2024-12-16 10:09:53.848518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.240 [2024-12-16 10:09:53.860341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.240 [2024-12-16 10:09:53.860429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.240 [2024-12-16 10:09:53.860459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.875056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.875114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.875143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.887963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.888021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.888049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.902550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.902609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.902637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.914394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.914499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.914528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.927699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.927757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.927785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.941796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.941853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.941880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.951915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.951971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.951999] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.965094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.965152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.965180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.980493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.980569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.980597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:53.990188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:53.990229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:53.990257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.003149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.003207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.017789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.017848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.017877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.031091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.031146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.031174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.045672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.045729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.045759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.058812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.058870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.058898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.071948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.499 [2024-12-16 10:09:54.072005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.499 [2024-12-16 10:09:54.072035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.499 [2024-12-16 10:09:54.085142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.500 [2024-12-16 10:09:54.085197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.500 [2024-12-16 10:09:54.085225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.500 [2024-12-16 10:09:54.097223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.500 [2024-12-16 10:09:54.097280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.500 [2024-12-16 10:09:54.097308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.500 [2024-12-16 10:09:54.107245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.500 [2024-12-16 10:09:54.107302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.500 [2024-12-16 10:09:54.107330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.500 [2024-12-16 10:09:54.120325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.500 [2024-12-16 10:09:54.120392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.500 [2024-12-16 10:09:54.120422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.131702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.131743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:55.759 [2024-12-16 10:09:54.131770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.142315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.142380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.154133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.154174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.154203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.165184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.165240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.165269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.176266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.176321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.176350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.186105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.186141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.186169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.198877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.198931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.198959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.211751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.211806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1106 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.211834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.225488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.225529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.225557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.238407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.238461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.238489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.251825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.251880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.251908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.261140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.261197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.261225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.278863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.278920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.278949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.291904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.291958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.291986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.303619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.303674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.303702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.312633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.312687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.312715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.324937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.324993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.325021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.335115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.335170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.335199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.345488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.345542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.345570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.356362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.356433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.356462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.367071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.367127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.367155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:55.759 [2024-12-16 10:09:54.376592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:55.759 [2024-12-16 10:09:54.376647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.759 [2024-12-16 10:09:54.376675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.018 [2024-12-16 10:09:54.391141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.391197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.391225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.405488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.405543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.405571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.418677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.418733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.418761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.431849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.431919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.431947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.444175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.444231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.444260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.455179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.455237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.455265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.468617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.468675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.468704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.480225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.480280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.480309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.492126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.492184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.492212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.500993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.501049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.501077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.514228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.514269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.514297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.528490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.528545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.528574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.542032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.542110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.542139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.555875] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.555932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.555960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.567924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.567980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.568009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.578854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.578910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.578939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.590672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.590728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.590756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.599828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.599897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.599926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.611851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.611907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.611935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.019 [2024-12-16 10:09:54.622082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.622122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.622150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
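Each injected digest error in the run above surfaces as the same three-record group: an nvme_tcp data digest error on the queue pair, the READ command it hit, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1. For a rough cross-check of how many such completions a saved console log contains, independently of the bdev_get_iostat counter the harness reads further down, a grep over the captured output is enough; the build.log name below is only a placeholder for wherever this console output was saved:

  # Rough occurrence counts from a saved console log (illustrative only;
  # "build.log" stands in for the captured output file).
  grep -o 'data digest error on tqpair' build.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l
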
00:22:56.019 [2024-12-16 10:09:54.632966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.019 [2024-12-16 10:09:54.633023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.019 [2024-12-16 10:09:54.633051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.643863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.643920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.643949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.654111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.654152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.654180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.666870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.666925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.666953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.681055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.681153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.694956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.695013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.695042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.707694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.707748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.278 [2024-12-16 10:09:54.707777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.278 [2024-12-16 10:09:54.716657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.278 [2024-12-16 10:09:54.716729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.730566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.730607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.730635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.744398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.744452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.744479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.756750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.756804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.756832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.770533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.770574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.770602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.783239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.783295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.783323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.794988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.795044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.795072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.808285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.808341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.808381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.817058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.817114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.817141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.833448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.833485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.833514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.843425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.843500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.843515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.857277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.857332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.857360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.870926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.870982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.279 [2024-12-16 10:09:54.871011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:56.279 [2024-12-16 10:09:54.883728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0) 00:22:56.279 [2024-12-16 10:09:54.883782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.279 [2024-12-16 10:09:54.883810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:56.279 [2024-12-16 10:09:54.897291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0)
00:22:56.279 [2024-12-16 10:09:54.897348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.279 [2024-12-16 10:09:54.897388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:56.537 [2024-12-16 10:09:54.909626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0c8d0)
00:22:56.537 [2024-12-16 10:09:54.909666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.537 [2024-12-16 10:09:54.909694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:56.537
00:22:56.537 Latency(us)
00:22:56.537 [2024-12-16T10:09:55.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:56.537 [2024-12-16T10:09:55.162Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:56.537 nvme0n1 : 2.01 20561.29 80.32 0.00 0.00 6220.12 2785.28 19660.80
00:22:56.537 [2024-12-16T10:09:55.162Z] ===================================================================================================================
00:22:56.537 [2024-12-16T10:09:55.162Z] Total : 20561.29 80.32 0.00 0.00 6220.12 2785.28 19660.80
00:22:56.538 0
00:22:56.538 10:09:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:56.538 10:09:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:56.538 10:09:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:56.538 10:09:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:56.538 | .driver_specific
00:22:56.538 | .nvme_error
00:22:56.538 | .status_code
00:22:56.538 | .command_transient_transport_error'
00:22:56.795 10:09:55 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:22:56.795 10:09:55 -- host/digest.sh@73 -- # killprocess 97692
00:22:56.795 10:09:55 -- common/autotest_common.sh@936 -- # '[' -z 97692 ']'
00:22:56.795 10:09:55 -- common/autotest_common.sh@940 -- # kill -0 97692
00:22:56.795 10:09:55 -- common/autotest_common.sh@941 -- # uname
00:22:56.795 10:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:56.795 10:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97692
00:22:56.795 10:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:56.795 killing process with pid 97692 10:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:56.795 10:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97692' Received shutdown signal, test time was about 2.000000 seconds
00:22:56.795
00:22:56.795 Latency(us)
[2024-12-16T10:09:55.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-16T10:09:55.420Z] ===================================================================================================================
00:22:56.795 [2024-12-16T10:09:55.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:56.795 10:09:55 -- common/autotest_common.sh@955 -- # kill 97692
00:22:56.795 10:09:55 -- common/autotest_common.sh@960 -- # wait 97692
00:22:57.053 10:09:55 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:57.053 10:09:55 -- host/digest.sh@54 -- # local rw bs qd
00:22:57.053 10:09:55 -- host/digest.sh@56 -- # rw=randread
00:22:57.053 10:09:55 -- host/digest.sh@56 -- # bs=131072
00:22:57.053 10:09:55 -- host/digest.sh@56 -- # qd=16
00:22:57.053 10:09:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:57.053 10:09:55 -- host/digest.sh@58 -- # bperfpid=97782
00:22:57.053 10:09:55 -- host/digest.sh@60 -- # waitforlisten 97782 /var/tmp/bperf.sock
00:22:57.053 10:09:55 -- common/autotest_common.sh@829 -- # '[' -z 97782 ']'
00:22:57.053 10:09:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:57.053 10:09:55 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:57.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 10:09:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 10:09:55 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:57.053 10:09:55 -- common/autotest_common.sh@10 -- # set +x
00:22:57.053 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:57.053 Zero copy mechanism will not be used.
00:22:57.053 [2024-12-16 10:09:55.494510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:57.053 [2024-12-16 10:09:55.494608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97782 ]
00:22:57.053 [2024-12-16 10:09:55.625854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:57.311 [2024-12-16 10:09:55.700135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:57.878 10:09:56 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:57.878 10:09:56 -- common/autotest_common.sh@862 -- # return 0
00:22:57.878 10:09:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:57.878 10:09:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:58.137 10:09:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:58.137 10:09:56 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.137 10:09:56 -- common/autotest_common.sh@10 -- # set +x
00:22:58.137 10:09:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.137 10:09:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:58.137 10:09:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:58.395 nvme0n1
00:22:58.655 10:09:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:58.655 10:09:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:58.655 10:09:57 -- common/autotest_common.sh@10 -- # set +x
00:22:58.655 10:09:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.655 10:09:57 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:58.655 10:09:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:58.655 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:58.655 Zero copy mechanism will not be used.
00:22:58.655 Running I/O for 2 seconds...
00:22:58.655 [2024-12-16 10:09:57.180396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.180478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.180494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:58.655 [2024-12-16 10:09:57.183668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.183736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.183764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:58.655 [2024-12-16 10:09:57.187483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.187536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.187548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:58.655 [2024-12-16 10:09:57.190942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.190998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.191026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:58.655 [2024-12-16 10:09:57.194266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.194305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.194334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:58.655 [2024-12-16 10:09:57.197679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10)
00:22:58.655 [2024-12-16 10:09:57.197715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:58.655 [2024-12-16 10:09:57.197744] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.201057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.201093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.201121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.204842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.204877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.204905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.208572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.208610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.208639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.211803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.211839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.211867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.215265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.215536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.219302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.219342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.219380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.222824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.222862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.655 [2024-12-16 10:09:57.222890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.226480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.226534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.226562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.229989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.230024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.230078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.233731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.233796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.236991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.237027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.237055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.240430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.240469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.240498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.244385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.244433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.655 [2024-12-16 10:09:57.244462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.655 [2024-12-16 10:09:57.247444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.655 [2024-12-16 10:09:57.247482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.247511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.250763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.250802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.250830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.254736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.254774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.254802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.258256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.258297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.258327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.261979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.262014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.262066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.265969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.266005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.266033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.269470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.269506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.269535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.272827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.272862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.272891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.656 [2024-12-16 10:09:57.276374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.656 [2024-12-16 10:09:57.276418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.656 [2024-12-16 10:09:57.276446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.279619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.279817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.279850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.283367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.283404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.283432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.286828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.286864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.286892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.290406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.290442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.290470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.293842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.293876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.293905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.296496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.296531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.296559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.299919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.300113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.300145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.303817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.303853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.303880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.307611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.307649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.307677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.311361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.311564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.311597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.315463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.315503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.315532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.318521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.318561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.318591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.322190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 
00:22:58.917 [2024-12-16 10:09:57.322231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.322260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.325915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.325951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.325980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.329106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.329142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.329170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.332542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.332735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.332767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.336484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.336522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.336549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.339754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.339792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.339820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.343107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.343145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.343173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.347245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.347286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.347315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.350991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.351028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.351056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.354626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.354662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.357897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.917 [2024-12-16 10:09:57.357932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.917 [2024-12-16 10:09:57.357960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.917 [2024-12-16 10:09:57.360961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.360997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.361026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.364486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.364524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.364552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.367527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.367563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.367606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.371205] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.371243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.371271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.375150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.375346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.375406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.379402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.379435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.379463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.383346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.383410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.383439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.387396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.387433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.387461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.391560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.391598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.391627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.394880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.394916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.394944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:58.918 [2024-12-16 10:09:57.398485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.398520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.398548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.402307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.402530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.402564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.406679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.406716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.406744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.410125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.410324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.410357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.412551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.412581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.412608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.416396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.416434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.419933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.419972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.420001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.423289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.423329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.423357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.426675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.426714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.426742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.430093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.430145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.430175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.433708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.433922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.433955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.437863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.438092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.438303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.442033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.442257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.442429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.446880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.446921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.446949] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.450509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.450708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.450740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.454148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.454187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.454216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.457289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.918 [2024-12-16 10:09:57.457530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.918 [2024-12-16 10:09:57.457547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.918 [2024-12-16 10:09:57.461057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.461276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.461440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.465849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.466056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.466092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.470297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.470550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.470582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.474621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.474662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.474691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.478518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.478556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.478585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.481661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.481697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.481726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.484882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.484917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.484945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.488731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.488770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.488797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.491943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.491981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.492009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.495331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.495397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.495411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.498767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.498806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:58.919 [2024-12-16 10:09:57.498834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.502194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.502232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.502261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.506715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.506893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.510811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.510851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.510880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.514833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.514888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.514917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.519074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.519269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.519302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.522492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.522544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.522557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.525775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.525811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.525839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.529855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.529942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:58.919 [2024-12-16 10:09:57.534304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:58.919 [2024-12-16 10:09:57.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.919 [2024-12-16 10:09:57.534400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.538624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.538663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.538706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.542423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.542461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.542473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.546264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.546496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.546530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.551216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.551256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.551284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.554842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.554881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.554909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.558530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.558566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.558594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.562212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.562414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.562447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.565213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.565250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.565278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.569018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.569060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.569090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.572131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.572169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.572197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.575679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.575717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.575746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.578933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 
[2024-12-16 10:09:57.578970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.578998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.582142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.582343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.582362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.585626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.585664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.585692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.589060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.589096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.589124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.593647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.593840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.180 [2024-12-16 10:09:57.593970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.180 [2024-12-16 10:09:57.597312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.180 [2024-12-16 10:09:57.597495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.597529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.600896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.600935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.600963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.604494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.604671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.604705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.607957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.607999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.608028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.612007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.612237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.612483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.616293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.616467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.619867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.619909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.619939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.624124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.624300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.624333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.627422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.627484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.627498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.631097] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.631136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.631165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.635350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.635415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.635430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.638589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.638628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.638657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.642351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.642416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.642444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.645993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.646029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.646083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.649810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.649846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.649875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.653467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.653503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.653531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:59.181 [2024-12-16 10:09:57.657352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.657418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.657447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.660853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.660890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.660918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.664355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.664562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.664596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.668063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.668102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.668130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.671681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.671853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.671885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.675148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.675359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.675535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.679490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.679684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.683581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.683622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.683651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.687001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.687039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.687067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.690226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.690266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.690294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.693659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.693697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.693725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.181 [2024-12-16 10:09:57.696953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.181 [2024-12-16 10:09:57.697156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.181 [2024-12-16 10:09:57.697188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.700851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.700892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.700920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.704260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.704299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.707878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.707916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.707944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.711439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.711476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.711504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.715388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.715425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.715454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.719492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.719531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.719560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.722818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.722856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.722884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.726520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.726559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.726587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.730330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.730410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.730424] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.734481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.734520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.734549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.738353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.738423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.738439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.742023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.742085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.742115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.746235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.746504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.746637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.749769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.749853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.753412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.753448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.753476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.756935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.756972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.182 [2024-12-16 10:09:57.757001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.759836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.759870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.759898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.763828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.763866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.763895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.767343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.767392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.767421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.770917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.770955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.770983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.774819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.774870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.778496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.778714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.782239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.782526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.786025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.786202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.786235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.789691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.789880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.789912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.793069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.793107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.793136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.796823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.796861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-12-16 10:09:57.796889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.182 [2024-12-16 10:09:57.800563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.182 [2024-12-16 10:09:57.800601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-12-16 10:09:57.800630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.803948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.803985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.804013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.807021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.807246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.810832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.810990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.811135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.814762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.814968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.815124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.818680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.818912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.819095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.822687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.443 [2024-12-16 10:09:57.822876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.443 [2024-12-16 10:09:57.823049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.443 [2024-12-16 10:09:57.827045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.827263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.827523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.831408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.831610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.831751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.835523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.835742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.835896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.839425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.839594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.839627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.843719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.843773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.843802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.847143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.847181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.847208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.850673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.850711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.850739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.854238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.854445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.854479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.857862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.857898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.857927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.861413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 
00:22:59.444 [2024-12-16 10:09:57.861448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.861477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.865232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.865268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.865296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.868779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.872319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.872379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.872392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.875857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.875894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.875922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.879122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.879157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.879185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.882903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.882939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.882967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.886740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.886776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.886805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.890804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.891014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.891048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.894945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.894983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.895012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.898745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.898988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.899102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.902820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.902857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.902885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.906542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.906585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.906599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.910118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.910308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.910326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.914479] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.914520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.914550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.918000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.918036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.918099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.922356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.922574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.922608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.927196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.927459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.927723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.444 [2024-12-16 10:09:57.931148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.444 [2024-12-16 10:09:57.931338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.444 [2024-12-16 10:09:57.931585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.935256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.935295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.935324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.938615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.938653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.938682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:59.445 [2024-12-16 10:09:57.941960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.941996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.942024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.946079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.946118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.946146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.949554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.949590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.949618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.952515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.952552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.952581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.955994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.956033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.959234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.959271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.959299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.962577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.962614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.962642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.966425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.966462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.966490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.969728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.969916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.969948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.973555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.973591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.973620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.976639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.976676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.976704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.980018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.980056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.980084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.983944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.984156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.984276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.988337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.988559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.988754] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.992673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.992891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.993025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:57.996412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:57.996592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:57.996732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.000121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.000327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.000344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.003818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.003856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.003884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.007342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.007547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.007581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.010475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.010508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.010535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.013933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.013969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.013997] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.018037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.018099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.018128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.021660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.021695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.021723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.025571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.025608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.025636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.028968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.029004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.445 [2024-12-16 10:09:58.029032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.445 [2024-12-16 10:09:58.032982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.445 [2024-12-16 10:09:58.033176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.033209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.036813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.036850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.036878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.040792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.040829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.446 [2024-12-16 10:09:58.040857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.043885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.043923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.043952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.047397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.047433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.047462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.051008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.051218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.051252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.054689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.054742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.054771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.058408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.058463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.058491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.446 [2024-12-16 10:09:58.061914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.446 [2024-12-16 10:09:58.061952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.446 [2024-12-16 10:09:58.061980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.065525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.065562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.065590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.068780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.068815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.068843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.072302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.072340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.072380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.075842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.075880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.075909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.079191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.079393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.079426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.082758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.082815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.082845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.086553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.086590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.086619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.089731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.089767] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.089795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.093454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.093490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.093518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.096984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.097021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.097049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.100577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.100613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.100642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.103754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.103807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.103835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.107567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.707 [2024-12-16 10:09:58.107604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.707 [2024-12-16 10:09:58.107632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.707 [2024-12-16 10:09:58.111031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.111228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.111260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.115022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.115061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.115089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.118643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.118844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.118880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.122315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.122612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.122760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.126359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.126573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.126730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.130922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.131094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.131127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.134705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.134761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.134790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.138447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.138484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.138512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.141646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.141804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.141837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.145246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.145284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.145312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.148869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.148907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.148951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.152467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.152505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.152535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.155714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.155752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.155781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.159491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.159530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.159558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.163168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.163393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.163411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.166792] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.166834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.166864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.170090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.170128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.170158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.173496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.173532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.173560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.177123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.177162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.177190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.180827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.180865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.180894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.184267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.184493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.184510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.188203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.188420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.188454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:59.708 [2024-12-16 10:09:58.192733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.192788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.192816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.195836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.195876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.195905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.199664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.199702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.199730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.203218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.203257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.203285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.206669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.206708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.708 [2024-12-16 10:09:58.206751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.708 [2024-12-16 10:09:58.209868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.708 [2024-12-16 10:09:58.210081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.210114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.213761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.213948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.213981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.217719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.217802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.217817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.221413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.221452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.221480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.224928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.224966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.224994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.228537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.228576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.228604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.232371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.232439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.232469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.235715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.235769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.235796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.238512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.238549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.238577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.242193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.242233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.242261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.245703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.245741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.245770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.249462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.249497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.249525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.253314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.253517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.253550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.257334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.257548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.257582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.261010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.261196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.261229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.265064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.265102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.709 [2024-12-16 10:09:58.265130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.268572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.268611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.268640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.271544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.271581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.271610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.274959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.274996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.275024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.278719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.278757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.278801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.282401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.282453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.282481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.285515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.285549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.285576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.288845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.288882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.288910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.292555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.292592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.292621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.295766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.295802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.295830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.298689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.298725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.298753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.302083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.302120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.302132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.305865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.709 [2024-12-16 10:09:58.306090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.709 [2024-12-16 10:09:58.306107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.709 [2024-12-16 10:09:58.309543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.309579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.309607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.710 [2024-12-16 10:09:58.313041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.313078] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.313107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.710 [2024-12-16 10:09:58.316296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.316333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.316377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.710 [2024-12-16 10:09:58.319895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.319933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.319961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.710 [2024-12-16 10:09:58.323877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.323915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.323943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.710 [2024-12-16 10:09:58.327439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.710 [2024-12-16 10:09:58.327476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.710 [2024-12-16 10:09:58.327504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.330824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.331021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.331055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.334461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.334497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.334525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.338304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.338346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.338402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.341555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.341590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.341618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.344769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.344805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.344833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.347766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.347802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.347830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.351384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.351420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.351448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.354750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.354959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.354976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.358594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.358786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.358818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.362315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.362581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.362616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.365898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.366093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.366126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.370134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.370174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.373393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.373440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.373468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.376585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.376623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.376652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.379866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.379902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.379930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.383677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.383713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.383740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.387024] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.387061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.387089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.390523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.390558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.390586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.393546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.393580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.393608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.396917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.396952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.396979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.400441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.400476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.400504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.403768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.403805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.403833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.407682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.407719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.407747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:59.971 [2024-12-16 10:09:58.410686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.410724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.410752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.414464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.971 [2024-12-16 10:09:58.414502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.971 [2024-12-16 10:09:58.414530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.971 [2024-12-16 10:09:58.417580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.417615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.417643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.421146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.421348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.421394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.424827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.424865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.424893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.428688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.428727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.428772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.432567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.432605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.432634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.435721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.435758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.435786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.438853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.439063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.439080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.441948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.441983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.442011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.445964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.446174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.446191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.450139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.450178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.450190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.453786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.453993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.454010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.457287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.457324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.457352] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.460466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.460502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.460530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.464047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.464086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.464114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.467514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.467550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.467578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.470631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.470667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.470695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.474566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.474602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.474631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.478211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.478438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.481700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.481736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.481764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.484902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.484936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.484965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.488224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.488261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.488288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.491731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.491768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.491796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.494982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.495019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.495047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.498721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.498928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.498945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.502515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.502553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.502582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.505781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.505817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.972 [2024-12-16 10:09:58.505845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.509261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.509295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.972 [2024-12-16 10:09:58.509324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.972 [2024-12-16 10:09:58.513111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.972 [2024-12-16 10:09:58.513150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.513178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.516707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.516747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.516791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.520838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.520877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.520905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.524247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.524284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.524312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.527719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.527757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.527785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.531075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.531114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.531142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.534910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.534948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.534976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.537696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.537730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.537759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.541188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.541223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.541251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.544659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.544694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.544723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.548611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.548647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.552126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.552351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.552395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.556273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.556477] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.556525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.559739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.559773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.559801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.563590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.563629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.563657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.567239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.567277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.567306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.571134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.571325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.575137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.575176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.575203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.578746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.578783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.578811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.582113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.582149] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.582178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.585968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.586134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.586168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:59.973 [2024-12-16 10:09:58.590095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:22:59.973 [2024-12-16 10:09:58.590133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.973 [2024-12-16 10:09:58.590161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.248 [2024-12-16 10:09:58.594141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.248 [2024-12-16 10:09:58.594179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.248 [2024-12-16 10:09:58.594208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.248 [2024-12-16 10:09:58.597640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.248 [2024-12-16 10:09:58.597675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.248 [2024-12-16 10:09:58.597703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.248 [2024-12-16 10:09:58.601692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.248 [2024-12-16 10:09:58.601728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.601755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.605003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.605040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.605068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.608842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.608878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.608906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.612646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.612686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.612714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.616549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.616588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.616617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.620077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.620286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.620319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.622824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.622860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.622887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.626631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.626669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.626697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.629822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.629857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.629886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.633647] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.633683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.633711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.636823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.637008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.637041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.640569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.640604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.640632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.644266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.644305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.644334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.647543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.647586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.647614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.650707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.650744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.650772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.653680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.653743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:00.249 [2024-12-16 10:09:58.657266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.657494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.660606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.660655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.660683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.664165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.664204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.664233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.668436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.668484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.249 [2024-12-16 10:09:58.668527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.249 [2024-12-16 10:09:58.671599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.249 [2024-12-16 10:09:58.671636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.671664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.675174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.675213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.678768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.678805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.678833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.682217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.682430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.682479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.686717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.686757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.686801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.689974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.690168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.690201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.693693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.693729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.693757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.696894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.697078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.697110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.700263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.700297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.700325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.704280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.704319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.704347] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.707774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.707813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.707841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.711282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.711319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.711347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.714653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.714690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.714718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.717707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.717743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.717771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.721317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.721381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.721396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.724894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.724930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.724958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.728255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.728292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.728320] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.731494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.731532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.731559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.735032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.735239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.735256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.739087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.739292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.250 [2024-12-16 10:09:58.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.250 [2024-12-16 10:09:58.743265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.250 [2024-12-16 10:09:58.743305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.743333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.747325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.747404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.747419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.751431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.751496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.751510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.755297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.755337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.251 [2024-12-16 10:09:58.755392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.759202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.759242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.759270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.763188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.763228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.763257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.766913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.766950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.766978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.771046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.771085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.771114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.774434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.774472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.778295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.778336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.778390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.781922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.781959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.785375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.785423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.785453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.789228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.789266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.789294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.793519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.793560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.793588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.796786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.796823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.796851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.800388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.800423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.800451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.803779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.803817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.803846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.806766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.806805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.806832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.810250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.810293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.810323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.813943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.813980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.251 [2024-12-16 10:09:58.814009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.251 [2024-12-16 10:09:58.817360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.251 [2024-12-16 10:09:58.817421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.817432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.820674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.820712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.820741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.824748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.824801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.824829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.827899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.827935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.827964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.831483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 
00:23:00.252 [2024-12-16 10:09:58.831673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.831721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.835411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.835449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.835478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.838638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.838677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.838705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.842272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.842313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.845308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.845345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.845383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.848832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.848872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.848900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.852987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.853029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.853057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.856692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.856731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.856775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.252 [2024-12-16 10:09:58.859849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.252 [2024-12-16 10:09:58.859888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.252 [2024-12-16 10:09:58.859916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.863617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.863655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.867035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.867074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.867103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.870850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.870889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.870918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.873866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.873905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.873934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.877549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.877586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.877614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.881080] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.881116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.881145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.884552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.884589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.884617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.888303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.888509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.888542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.891968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.892003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.892032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.895799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.895987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.896004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.899848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.900037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.900054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.903593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.903627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.903655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:00.539 [2024-12-16 10:09:58.906650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.906704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.906732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.909884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.909922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.909950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.913408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.539 [2024-12-16 10:09:58.913475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.539 [2024-12-16 10:09:58.913491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.539 [2024-12-16 10:09:58.917462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.917502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.917532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.921121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.921159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.921188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.924974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.925008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.925036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.928558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.928598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.928612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.932938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.933190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.933228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.936944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.936985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.937014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.941119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.941295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.941327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.944958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.944995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.945024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.948520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.948559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.948588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.952235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.952274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.952303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.956165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.956207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.956236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.959294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.959333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.959361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.962829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.962868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.962896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.966255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.966444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.966461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.969715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.969753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.969783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.973232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.973268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.973296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.977330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.977536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.977570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.981209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.981436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.981454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.984864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.984906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.984935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.988628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.988669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.988698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.992070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.992108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.992136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.995720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.995757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.995787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:58.998857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:58.999048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:58.999079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:59.002184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:59.002219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:59.002248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:59.005725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:59.005943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:00.540 [2024-12-16 10:09:59.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:59.009624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:59.009816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:59.009965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:59.013582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:59.013760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.540 [2024-12-16 10:09:59.013931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.540 [2024-12-16 10:09:59.017185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.540 [2024-12-16 10:09:59.017380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.017697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.021356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.021565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.021709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.025015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.025209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.025350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.028798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.028991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.033170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.033390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.033498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.036720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.036758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.036786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.040411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.040449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.040476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.043706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.043744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.043773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.046817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.046854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.046883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.050180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.050437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.050471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.053765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.053802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.053830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.057495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.057546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.057574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.060643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.060678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.060706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.064095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.064132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.064161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.067342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.067404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.070663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.070701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.070745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.073961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.074167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.074185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.077514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.077696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.077728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.081478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 
00:23:00.541 [2024-12-16 10:09:59.081520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.081549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.084976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.085015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.085043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.088681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.088719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.092681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.092719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.092748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.095927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.095965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.095994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.099770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.099823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.099851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.103885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.103923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.103951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.107055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.107092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.107120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.110932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.111139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.541 [2024-12-16 10:09:59.111156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.541 [2024-12-16 10:09:59.114912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.541 [2024-12-16 10:09:59.114951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.114980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.118262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.118301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.118330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.121674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.121711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.121740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.125298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.125512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.125545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.129096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.129134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.129162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.132364] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.132424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.132438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.136123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.136162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.136190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.139586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.139653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.142784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.142822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.142850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.145931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.145966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.145994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.149756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.149791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.149819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.542 [2024-12-16 10:09:59.153447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.153483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.153512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:00.542 [2024-12-16 10:09:59.157273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.542 [2024-12-16 10:09:59.157313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.542 [2024-12-16 10:09:59.157341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.801 [2024-12-16 10:09:59.161298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.801 [2024-12-16 10:09:59.161337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.801 [2024-12-16 10:09:59.161380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.801 [2024-12-16 10:09:59.164531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.801 [2024-12-16 10:09:59.164568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.801 [2024-12-16 10:09:59.164596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.801 [2024-12-16 10:09:59.167824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23efd10) 00:23:00.801 [2024-12-16 10:09:59.167862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.801 [2024-12-16 10:09:59.167889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.801 00:23:00.801 Latency(us) 00:23:00.801 [2024-12-16T10:09:59.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.801 [2024-12-16T10:09:59.426Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:00.801 nvme0n1 : 2.00 8496.42 1062.05 0.00 0.00 1880.01 521.31 11200.70 00:23:00.801 [2024-12-16T10:09:59.426Z] =================================================================================================================== 00:23:00.801 [2024-12-16T10:09:59.426Z] Total : 8496.42 1062.05 0.00 0.00 1880.01 521.31 11200.70 00:23:00.801 0 00:23:00.801 10:09:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:00.801 10:09:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:00.801 10:09:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:00.801 | .driver_specific 00:23:00.801 | .nvme_error 00:23:00.801 | .status_code 00:23:00.801 | .command_transient_transport_error' 00:23:00.801 10:09:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:01.059 10:09:59 -- host/digest.sh@71 -- # (( 548 > 0 )) 00:23:01.059 10:09:59 -- host/digest.sh@73 -- # killprocess 97782 00:23:01.059 10:09:59 -- common/autotest_common.sh@936 -- # '[' -z 97782 ']' 00:23:01.059 10:09:59 -- common/autotest_common.sh@940 -- # kill -0 97782 00:23:01.059 10:09:59 -- common/autotest_common.sh@941 -- # uname 00:23:01.059 10:09:59 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.059 10:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97782 00:23:01.059 killing process with pid 97782 00:23:01.059 Received shutdown signal, test time was about 2.000000 seconds 00:23:01.059 00:23:01.059 Latency(us) 00:23:01.059 [2024-12-16T10:09:59.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.059 [2024-12-16T10:09:59.684Z] =================================================================================================================== 00:23:01.059 [2024-12-16T10:09:59.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.059 10:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.059 10:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:01.059 10:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97782' 00:23:01.059 10:09:59 -- common/autotest_common.sh@955 -- # kill 97782 00:23:01.059 10:09:59 -- common/autotest_common.sh@960 -- # wait 97782 00:23:01.318 10:09:59 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:01.318 10:09:59 -- host/digest.sh@54 -- # local rw bs qd 00:23:01.318 10:09:59 -- host/digest.sh@56 -- # rw=randwrite 00:23:01.318 10:09:59 -- host/digest.sh@56 -- # bs=4096 00:23:01.318 10:09:59 -- host/digest.sh@56 -- # qd=128 00:23:01.318 10:09:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:01.318 10:09:59 -- host/digest.sh@58 -- # bperfpid=97867 00:23:01.318 10:09:59 -- host/digest.sh@60 -- # waitforlisten 97867 /var/tmp/bperf.sock 00:23:01.318 10:09:59 -- common/autotest_common.sh@829 -- # '[' -z 97867 ']' 00:23:01.318 10:09:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:01.318 10:09:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.318 10:09:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:01.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:01.318 10:09:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.318 10:09:59 -- common/autotest_common.sh@10 -- # set +x 00:23:01.318 [2024-12-16 10:09:59.752513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:01.318 [2024-12-16 10:09:59.752790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97867 ] 00:23:01.318 [2024-12-16 10:09:59.883671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.577 [2024-12-16 10:09:59.950778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.144 10:10:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.144 10:10:00 -- common/autotest_common.sh@862 -- # return 0 00:23:02.144 10:10:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:02.144 10:10:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:02.403 10:10:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:02.403 10:10:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.403 10:10:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.403 10:10:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.403 10:10:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.403 10:10:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.970 nvme0n1 00:23:02.970 10:10:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:02.970 10:10:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.970 10:10:01 -- common/autotest_common.sh@10 -- # set +x 00:23:02.970 10:10:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.970 10:10:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:02.970 10:10:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:02.970 Running I/O for 2 seconds... 
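The trace above shows host/digest.sh setting up the randwrite error-injection pass: bdevperf is started in wait-for-RPC mode (-z) on /var/tmp/bperf.sock, crc32c error injection is disabled on the target while the controller is attached with TCP data digest enabled (--ddgst), injection is then re-enabled in corrupt mode once every 256 operations, and perform_tests is issued. What follows is a rough shell reconstruction of that sequence, assembled only from the commands visible in this trace (paths, the 10.0.0.2:4420 address and the nqn are copied from the log); the rpc_cmd calls are shown here as plain rpc.py invocations against the target's default socket, which the actual test wrapper may override.

  # initiator side: bdevperf in wait-for-RPC mode on the bperf socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  target_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # rpc_cmd in the trace

  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: stop corrupting crc32c while the controller attaches cleanly
  $target_rpc accel_error_inject_error -o crc32c -t disable
  # attach with data digest enabled so a corrupted CRC surfaces as a digest error
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt the computed crc32c once every 256 operations
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

  # afterwards the transient transport error counter is read back, as the
  # randread pass above did (there it reported 548 > 0):
  $bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error'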
00:23:02.970 [2024-12-16 10:10:01.424041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6890 00:23:02.970 [2024-12-16 10:10:01.424538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.424576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.435689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fcdd0 00:23:02.970 [2024-12-16 10:10:01.436572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.436626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.445147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fdeb0 00:23:02.970 [2024-12-16 10:10:01.445763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.445858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.453791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e2c28 00:23:02.970 [2024-12-16 10:10:01.454095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.454137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.465527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebb98 00:23:02.970 [2024-12-16 10:10:01.466515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.466550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.474060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6890 00:23:02.970 [2024-12-16 10:10:01.475128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.475177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.483871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e12d8 00:23:02.970 [2024-12-16 10:10:01.484387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.484439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 
dnr:0 00:23:02.970 [2024-12-16 10:10:01.494621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f3e60 00:23:02.970 [2024-12-16 10:10:01.495653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.495686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.503522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f2948 00:23:02.970 [2024-12-16 10:10:01.504897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.504945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.512923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:02.970 [2024-12-16 10:10:01.513507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.513586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.522267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ed4e8 00:23:02.970 [2024-12-16 10:10:01.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.522871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.532027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f4b08 00:23:02.970 [2024-12-16 10:10:01.533395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.533435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.970 [2024-12-16 10:10:01.542282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1f80 00:23:02.970 [2024-12-16 10:10:01.543872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.970 [2024-12-16 10:10:01.543922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-16 10:10:01.550598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f3a28 00:23:02.971 [2024-12-16 10:10:01.551733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.971 [2024-12-16 10:10:01.551781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:02.971 [2024-12-16 10:10:01.559966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6cc8 00:23:02.971 [2024-12-16 10:10:01.561668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.971 [2024-12-16 10:10:01.561705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.971 [2024-12-16 10:10:01.568497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e9168 00:23:02.971 [2024-12-16 10:10:01.569653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.971 [2024-12-16 10:10:01.569687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:02.971 [2024-12-16 10:10:01.578753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ff3c8 00:23:02.971 [2024-12-16 10:10:01.579277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.971 [2024-12-16 10:10:01.579312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:02.971 [2024-12-16 10:10:01.590232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe720 00:23:02.971 [2024-12-16 10:10:01.591392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.971 [2024-12-16 10:10:01.591450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.598472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6020 00:23:03.230 [2024-12-16 10:10:01.599722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.599769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.607914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ecc78 00:23:03.230 [2024-12-16 10:10:01.608678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.608711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.618685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ef6a8 00:23:03.230 [2024-12-16 10:10:01.619865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.625704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f81e0 00:23:03.230 [2024-12-16 10:10:01.626029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.626070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.637064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190de8a8 00:23:03.230 [2024-12-16 10:10:01.638125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.638158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.646015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190de038 00:23:03.230 [2024-12-16 10:10:01.647395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.647435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.655901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f5be8 00:23:03.230 [2024-12-16 10:10:01.656633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.656681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:03.230 [2024-12-16 10:10:01.666239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e12d8 00:23:03.230 [2024-12-16 10:10:01.667131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.230 [2024-12-16 10:10:01.667177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.674519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:03.231 [2024-12-16 10:10:01.675768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.675816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.684062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:03.231 [2024-12-16 10:10:01.685253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.685302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.693499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:03.231 [2024-12-16 10:10:01.694746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.694797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.702858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:03.231 [2024-12-16 10:10:01.704187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.704237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.712034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190dfdc0 00:23:03.231 [2024-12-16 10:10:01.713214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.713263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.722529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e5a90 00:23:03.231 [2024-12-16 10:10:01.722824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.722859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.732067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df988 00:23:03.231 [2024-12-16 10:10:01.733180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.733229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.742090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e0ea0 00:23:03.231 [2024-12-16 10:10:01.742391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.742444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.752000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190dfdc0 00:23:03.231 [2024-12-16 10:10:01.752307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.752364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.761884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e3498 00:23:03.231 [2024-12-16 10:10:01.762701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.762752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.771527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e23b8 00:23:03.231 [2024-12-16 10:10:01.771816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.771860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.782094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1f80 00:23:03.231 [2024-12-16 10:10:01.783558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.783607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.791910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190dfdc0 00:23:03.231 [2024-12-16 10:10:01.793316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.793390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.801658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:03.231 [2024-12-16 10:10:01.803058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.803110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.811444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e8088 00:23:03.231 [2024-12-16 10:10:01.812661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.812695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.821124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f46d0 00:23:03.231 [2024-12-16 10:10:01.821867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.821916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.831002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:03.231 [2024-12-16 10:10:01.831948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.831997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.842168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:03.231 [2024-12-16 10:10:01.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.843190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:03.231 [2024-12-16 10:10:01.851621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e4de8 00:23:03.231 [2024-12-16 10:10:01.852916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.231 [2024-12-16 10:10:01.852965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.861798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe2e8 00:23:03.490 [2024-12-16 10:10:01.862459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.862540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.871900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e8d30 00:23:03.490 [2024-12-16 10:10:01.872588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.872624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.881343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e4140 00:23:03.490 [2024-12-16 10:10:01.882748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.882798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.890261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f0350 00:23:03.490 [2024-12-16 10:10:01.891266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 
10:10:01.891313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.899645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f3a28 00:23:03.490 [2024-12-16 10:10:01.900648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.900694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.908959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fdeb0 00:23:03.490 [2024-12-16 10:10:01.909746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.909811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.920048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe720 00:23:03.490 [2024-12-16 10:10:01.920841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.920905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.929567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e4578 00:23:03.490 [2024-12-16 10:10:01.930392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.930457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.938573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebfd0 00:23:03.490 [2024-12-16 10:10:01.939690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.939738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.947420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6cc8 00:23:03.490 [2024-12-16 10:10:01.948403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.948462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.958617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1ca0 00:23:03.490 [2024-12-16 10:10:01.959588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:03.490 [2024-12-16 10:10:01.959635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.967846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e6b70 00:23:03.490 [2024-12-16 10:10:01.969281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.978954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1868 00:23:03.490 [2024-12-16 10:10:01.979738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.979802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.990412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e73e0 00:23:03.490 [2024-12-16 10:10:01.991261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:01.991337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:01.999967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eff18 00:23:03.490 [2024-12-16 10:10:02.000356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.000415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.011948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e0ea0 00:23:03.490 [2024-12-16 10:10:02.012865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.012911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.021267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1710 00:23:03.490 [2024-12-16 10:10:02.023060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.023110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.029844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f81e0 00:23:03.490 [2024-12-16 10:10:02.031145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.031195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.039500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e73e0 00:23:03.490 [2024-12-16 10:10:02.039921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.039955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.048893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e5220 00:23:03.490 [2024-12-16 10:10:02.049609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.049677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.057996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f96f8 00:23:03.490 [2024-12-16 10:10:02.059177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.059224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.067757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fd208 00:23:03.490 [2024-12-16 10:10:02.068198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.068233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.077240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f4298 00:23:03.490 [2024-12-16 10:10:02.077760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.077798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.086816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fc560 00:23:03.490 [2024-12-16 10:10:02.087937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.087986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.096574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1ca0 00:23:03.490 [2024-12-16 10:10:02.097316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1702 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.490 [2024-12-16 10:10:02.097384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:03.490 [2024-12-16 10:10:02.105951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ec408 00:23:03.491 [2024-12-16 10:10:02.106716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.491 [2024-12-16 10:10:02.106764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.115327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e88f8 00:23:03.750 [2024-12-16 10:10:02.116110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.116152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.124674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fb480 00:23:03.750 [2024-12-16 10:10:02.125433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.125509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.134171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:03.750 [2024-12-16 10:10:02.134908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.134960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.143686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fdeb0 00:23:03.750 [2024-12-16 10:10:02.144441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.144486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.153285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebfd0 00:23:03.750 [2024-12-16 10:10:02.154194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.154232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.162816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe2e8 00:23:03.750 [2024-12-16 10:10:02.163592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.163641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.172473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ddc00 00:23:03.750 [2024-12-16 10:10:02.173529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.173562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.182529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe720 00:23:03.750 [2024-12-16 10:10:02.183165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.183258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.192135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fef90 00:23:03.750 [2024-12-16 10:10:02.192982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.193060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.201538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ddc00 00:23:03.750 [2024-12-16 10:10:02.202344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.202400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.210912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e5658 00:23:03.750 [2024-12-16 10:10:02.211703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.211752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.220176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e12d8 00:23:03.750 [2024-12-16 10:10:02.220939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.220989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.229592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebfd0 00:23:03.750 [2024-12-16 10:10:02.230354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.230396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.239102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ed4e8 00:23:03.750 [2024-12-16 10:10:02.239850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.239901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.248431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f2948 00:23:03.750 [2024-12-16 10:10:02.249142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.249192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.258125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e4578 00:23:03.750 [2024-12-16 10:10:02.258746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.258827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.267134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f2d80 00:23:03.750 [2024-12-16 10:10:02.268146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.268197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.276740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f2948 00:23:03.750 [2024-12-16 10:10:02.277662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.277711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.286096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e73e0 00:23:03.750 [2024-12-16 10:10:02.287057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.287106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.295008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ed0b0 00:23:03.750 [2024-12-16 10:10:02.296209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.296257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.305223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:03.750 [2024-12-16 10:10:02.305973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.306021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.313482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f9f68 00:23:03.750 [2024-12-16 10:10:02.313700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.313719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.324640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fa3a0 00:23:03.750 [2024-12-16 10:10:02.326289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.326330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.333270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f3e60 00:23:03.750 [2024-12-16 10:10:02.334554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.334605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.343103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f8a50 00:23:03.750 [2024-12-16 10:10:02.343592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.343629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.355023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ecc78 00:23:03.750 [2024-12-16 10:10:02.356116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.356163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:03.750 [2024-12-16 10:10:02.362293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eee38 00:23:03.750 [2024-12-16 10:10:02.362490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:03.750 [2024-12-16 10:10:02.362509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.373500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fa7d8 00:23:04.010 [2024-12-16 10:10:02.374230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.374267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.382141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ec408 00:23:04.010 [2024-12-16 10:10:02.383324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.383402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.391854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f9b30 00:23:04.010 [2024-12-16 10:10:02.392380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.392425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.401641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f0788 00:23:04.010 [2024-12-16 10:10:02.402335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.402411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.411321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ea680 00:23:04.010 [2024-12-16 10:10:02.412702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.412784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.421177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebfd0 00:23:04.010 [2024-12-16 10:10:02.422089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.422124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.430474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e38d0 00:23:04.010 [2024-12-16 
10:10:02.431739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.441187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1b48 00:23:04.010 [2024-12-16 10:10:02.442633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.442674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.452354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f92c0 00:23:04.010 [2024-12-16 10:10:02.453605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.453642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.462853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1ca0 00:23:04.010 [2024-12-16 10:10:02.464011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.464060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.472944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f8e88 00:23:04.010 [2024-12-16 10:10:02.474125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.474162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.483187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6cc8 00:23:04.010 [2024-12-16 10:10:02.484113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.484166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.494806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f7100 00:23:04.010 [2024-12-16 10:10:02.495720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.495769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.504221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190de8a8 00:23:04.010 
[2024-12-16 10:10:02.505542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.505594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.514263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f4298 00:23:04.010 [2024-12-16 10:10:02.514958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.010 [2024-12-16 10:10:02.515005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:04.010 [2024-12-16 10:10:02.526665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e9e10 00:23:04.011 [2024-12-16 10:10:02.527901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.527951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.534008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebfd0 00:23:04.011 [2024-12-16 10:10:02.534418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.534464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.546508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1710 00:23:04.011 [2024-12-16 10:10:02.547524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.547573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.555268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ed0b0 00:23:04.011 [2024-12-16 10:10:02.556399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.556458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.565497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb760 00:23:04.011 [2024-12-16 10:10:02.566088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.566129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.577446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f7da8 
00:23:04.011 [2024-12-16 10:10:02.578645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.578761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.584815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e01f8 00:23:04.011 [2024-12-16 10:10:02.585131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.585154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.596668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e2c28 00:23:04.011 [2024-12-16 10:10:02.597634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.597683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.605522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fa7d8 00:23:04.011 [2024-12-16 10:10:02.606909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.606965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.615591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1710 00:23:04.011 [2024-12-16 10:10:02.616564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.616598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.011 [2024-12-16 10:10:02.624463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e2c28 00:23:04.011 [2024-12-16 10:10:02.625205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.011 [2024-12-16 10:10:02.625256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.634536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1868 00:23:04.271 [2024-12-16 10:10:02.635699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.635748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.644920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with 
pdu=0x2000190f5be8 00:23:04.271 [2024-12-16 10:10:02.645275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.645314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.654970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e12d8 00:23:04.271 [2024-12-16 10:10:02.655943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.655993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.664146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f57b0 00:23:04.271 [2024-12-16 10:10:02.664318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.664339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.675818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fef90 00:23:04.271 [2024-12-16 10:10:02.676543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.676594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.684575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f2510 00:23:04.271 [2024-12-16 10:10:02.685600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.685650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.693571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb328 00:23:04.271 [2024-12-16 10:10:02.693757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.693776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.705450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ebb98 00:23:04.271 [2024-12-16 10:10:02.706601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.706651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.713727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1a0e0) with pdu=0x2000190feb58 00:23:04.271 [2024-12-16 10:10:02.714987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.715037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.723286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f9f68 00:23:04.271 [2024-12-16 10:10:02.724040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.724089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.732252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e8088 00:23:04.271 [2024-12-16 10:10:02.733400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.733478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.741709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e7c50 00:23:04.271 [2024-12-16 10:10:02.742289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.753337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e99d8 00:23:04.271 [2024-12-16 10:10:02.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.754506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.761710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e3498 00:23:04.271 [2024-12-16 10:10:02.762888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.762938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.771259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e27f0 00:23:04.271 [2024-12-16 10:10:02.771912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.771974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.780963] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e5ec8 00:23:04.271 [2024-12-16 10:10:02.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.781675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.789891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e3060 00:23:04.271 [2024-12-16 10:10:02.790933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.790981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.798663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e49b0 00:23:04.271 [2024-12-16 10:10:02.799524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.799572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.809716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e01f8 00:23:04.271 [2024-12-16 10:10:02.810564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.810628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.819183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fda78 00:23:04.271 [2024-12-16 10:10:02.820114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.820161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.828476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e84c0 00:23:04.271 [2024-12-16 10:10:02.829647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.829694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.839102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e6fa8 00:23:04.271 [2024-12-16 10:10:02.840234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.840281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.846280] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb328 00:23:04.271 [2024-12-16 10:10:02.846485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.846505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:04.271 [2024-12-16 10:10:02.857750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ec408 00:23:04.271 [2024-12-16 10:10:02.858697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.271 [2024-12-16 10:10:02.858743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:04.272 [2024-12-16 10:10:02.866758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e73e0 00:23:04.272 [2024-12-16 10:10:02.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.272 [2024-12-16 10:10:02.868080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:04.272 [2024-12-16 10:10:02.876245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb328 00:23:04.272 [2024-12-16 10:10:02.876713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.272 [2024-12-16 10:10:02.876750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:04.272 [2024-12-16 10:10:02.885858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6458 00:23:04.272 [2024-12-16 10:10:02.886608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.272 [2024-12-16 10:10:02.886660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.895649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e1710 00:23:04.531 [2024-12-16 10:10:02.896423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.896469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.905407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f7100 00:23:04.531 [2024-12-16 10:10:02.906389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.906465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 
10:10:02.916668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ec840 00:23:04.531 [2024-12-16 10:10:02.917655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.917703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.926994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb328 00:23:04.531 [2024-12-16 10:10:02.927922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.927976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.939619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e3060 00:23:04.531 [2024-12-16 10:10:02.940566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.940603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.951665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eff18 00:23:04.531 [2024-12-16 10:10:02.952564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.952601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.963369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190feb58 00:23:04.531 [2024-12-16 10:10:02.964694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.964727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.970591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f3e60 00:23:04.531 [2024-12-16 10:10:02.971857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.971906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:02.982341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f92c0 00:23:04.531 [2024-12-16 10:10:02.983297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.983376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:23:04.531 [2024-12-16 10:10:02.991738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e6b70 00:23:04.531 [2024-12-16 10:10:02.992764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.531 [2024-12-16 10:10:02.992814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.531 [2024-12-16 10:10:03.003084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e95a0 00:23:04.532 [2024-12-16 10:10:03.004388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.004435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.016565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190df550 00:23:04.532 [2024-12-16 10:10:03.017865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.017914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.024523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e6738 00:23:04.532 [2024-12-16 10:10:03.024853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.024890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.037647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e27f0 00:23:04.532 [2024-12-16 10:10:03.038673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.038739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.047412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fb480 00:23:04.532 [2024-12-16 10:10:03.048816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.048864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.057622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fac10 00:23:04.532 [2024-12-16 10:10:03.058305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.058342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 
p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.069194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ecc78 00:23:04.532 [2024-12-16 10:10:03.070536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.070586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.076278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ef270 00:23:04.532 [2024-12-16 10:10:03.076665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.076700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.087732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e9e10 00:23:04.532 [2024-12-16 10:10:03.088778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.088824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.094860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f35f0 00:23:04.532 [2024-12-16 10:10:03.094960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.094979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.106248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fc998 00:23:04.532 [2024-12-16 10:10:03.107031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.107094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.115226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f0ff8 00:23:04.532 [2024-12-16 10:10:03.116416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.116494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.124782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fef90 00:23:04.532 [2024-12-16 10:10:03.125144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.125180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.134194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fef90 00:23:04.532 [2024-12-16 10:10:03.134556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.134593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.143885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fd208 00:23:04.532 [2024-12-16 10:10:03.144440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.144489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:04.532 [2024-12-16 10:10:03.153599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fe720 00:23:04.532 [2024-12-16 10:10:03.153922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.532 [2024-12-16 10:10:03.153953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.164455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190eb760 00:23:04.792 [2024-12-16 10:10:03.165775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.165824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.174360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e9168 00:23:04.792 [2024-12-16 10:10:03.175082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.175131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.184250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fb8b8 00:23:04.792 [2024-12-16 10:10:03.185013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.193292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fd208 00:23:04.792 [2024-12-16 10:10:03.194869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.194922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.202237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f5be8 00:23:04.792 [2024-12-16 10:10:03.203120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.203169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.211597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e38d0 00:23:04.792 [2024-12-16 10:10:03.212461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.222881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e49b0 00:23:04.792 [2024-12-16 10:10:03.223763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.223809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.231886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190de038 00:23:04.792 [2024-12-16 10:10:03.233131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.233178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.241459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190efae0 00:23:04.792 [2024-12-16 10:10:03.241862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.241899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.251135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fbcf0 00:23:04.792 [2024-12-16 10:10:03.252524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.252572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.262274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e7c50 00:23:04.792 [2024-12-16 10:10:03.264044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.264093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.270908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e3060 00:23:04.792 [2024-12-16 10:10:03.272177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.272224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.280590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f81e0 00:23:04.792 [2024-12-16 10:10:03.281192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.290229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f6020 00:23:04.792 [2024-12-16 10:10:03.290930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.290979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.299588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ec840 00:23:04.792 [2024-12-16 10:10:03.300958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.301007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.309361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f7100 00:23:04.792 [2024-12-16 10:10:03.310325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.310372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.319384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e23b8 00:23:04.792 [2024-12-16 10:10:03.320922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.320969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.328823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f81e0 00:23:04.792 [2024-12-16 10:10:03.330342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.330404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.338226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f5be8 00:23:04.792 [2024-12-16 10:10:03.339711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.339759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.347679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1430 00:23:04.792 [2024-12-16 10:10:03.348942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.348991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.358920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190ed0b0 00:23:04.792 [2024-12-16 10:10:03.360193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.360240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.365956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f4f40 00:23:04.792 [2024-12-16 10:10:03.366378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.366418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.376620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190f1868 00:23:04.792 [2024-12-16 10:10:03.377556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.377604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.385544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190dfdc0 00:23:04.792 [2024-12-16 10:10:03.386862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.386912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:04.792 [2024-12-16 10:10:03.394959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190de038 00:23:04.792 [2024-12-16 10:10:03.395724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.792 [2024-12-16 10:10:03.395787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:04.793 [2024-12-16 10:10:03.404962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190fda78 00:23:04.793 [2024-12-16 10:10:03.406201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.793 [2024-12-16 10:10:03.406237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:04.793 [2024-12-16 10:10:03.414284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a0e0) with pdu=0x2000190e0ea0 00:23:05.051 [2024-12-16 10:10:03.415210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.051 [2024-12-16 10:10:03.415258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:05.051 00:23:05.051 Latency(us) 00:23:05.051 [2024-12-16T10:10:03.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.051 [2024-12-16T10:10:03.676Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:05.051 nvme0n1 : 2.00 25982.73 101.50 0.00 0.00 4921.46 1809.69 14954.12 00:23:05.051 [2024-12-16T10:10:03.676Z] =================================================================================================================== 00:23:05.051 [2024-12-16T10:10:03.676Z] Total : 25982.73 101.50 0.00 0.00 4921.46 1809.69 14954.12 00:23:05.051 0 00:23:05.051 10:10:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:05.051 10:10:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:05.051 | .driver_specific 00:23:05.051 | .nvme_error 00:23:05.051 | .status_code 00:23:05.051 | .command_transient_transport_error' 00:23:05.051 10:10:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:05.051 10:10:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:05.310 10:10:03 -- host/digest.sh@71 -- # (( 204 > 0 )) 00:23:05.310 10:10:03 -- host/digest.sh@73 -- # killprocess 97867 00:23:05.310 10:10:03 -- common/autotest_common.sh@936 -- # '[' -z 97867 ']' 00:23:05.310 10:10:03 -- common/autotest_common.sh@940 -- # kill -0 97867 00:23:05.310 10:10:03 -- common/autotest_common.sh@941 -- # uname 00:23:05.310 10:10:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.310 10:10:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97867 00:23:05.310 10:10:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.310 10:10:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.310 killing process with pid 97867 00:23:05.310 10:10:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97867' 00:23:05.310 10:10:03 -- common/autotest_common.sh@955 -- # kill 97867 00:23:05.310 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.310 00:23:05.310 Latency(us) 00:23:05.310 [2024-12-16T10:10:03.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.310 [2024-12-16T10:10:03.935Z] =================================================================================================================== 
00:23:05.310 [2024-12-16T10:10:03.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.310 10:10:03 -- common/autotest_common.sh@960 -- # wait 97867 00:23:05.569 10:10:03 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:05.569 10:10:03 -- host/digest.sh@54 -- # local rw bs qd 00:23:05.569 10:10:03 -- host/digest.sh@56 -- # rw=randwrite 00:23:05.569 10:10:03 -- host/digest.sh@56 -- # bs=131072 00:23:05.569 10:10:03 -- host/digest.sh@56 -- # qd=16 00:23:05.569 10:10:03 -- host/digest.sh@58 -- # bperfpid=97957 00:23:05.569 10:10:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:05.569 10:10:03 -- host/digest.sh@60 -- # waitforlisten 97957 /var/tmp/bperf.sock 00:23:05.569 10:10:03 -- common/autotest_common.sh@829 -- # '[' -z 97957 ']' 00:23:05.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.569 10:10:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.569 10:10:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.569 10:10:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.569 10:10:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.569 10:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:05.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:05.569 Zero copy mechanism will not be used. 00:23:05.569 [2024-12-16 10:10:03.978977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:05.569 [2024-12-16 10:10:03.979065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97957 ] 00:23:05.569 [2024-12-16 10:10:04.104936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.569 [2024-12-16 10:10:04.173929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.505 10:10:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.505 10:10:04 -- common/autotest_common.sh@862 -- # return 0 00:23:06.505 10:10:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:06.505 10:10:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:06.762 10:10:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:06.762 10:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.762 10:10:05 -- common/autotest_common.sh@10 -- # set +x 00:23:06.762 10:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.762 10:10:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.762 10:10:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.021 nvme0n1 00:23:07.021 10:10:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:07.021 10:10:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.021 
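The trace above captures the setup for this digest-error pass: bdevperf is launched in wait mode (-z) on /var/tmp/bperf.sock, bdev_nvme is configured with --nvme-error-stat and --bdev-retry-count -1, the controller is attached over TCP with data digest (--ddgst) enabled, and CRC32C error injection is armed before the workload runs. A condensed sketch of that flow, reconstructed only from the commands visible in this log (socket path, target address, NQN, and the nvme0n1 bdev name are copied from the trace; this is not the host/digest.sh script itself):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Start bdevperf in wait mode (-z); it listens on the bperf socket until perform_tests is sent.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
# (the trace then waits for the socket to come up before issuing RPCs)

# Collect per-command NVMe error statistics; --bdev-retry-count -1 as used in the trace.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Reset any previous CRC32C injection, attach the target with data digest enabled,
# then arm 'corrupt' injection so data digest errors are reported (flags copied from the trace).
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then read back how many COMMAND TRANSIENT TRANSPORT ERROR completions
# were recorded for nvme0n1 (the same jq filter used by get_transient_errcount earlier in this log).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'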
10:10:05 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 10:10:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.021 10:10:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:07.021 10:10:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:07.021 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:07.021 Zero copy mechanism will not be used. 00:23:07.021 Running I/O for 2 seconds... 00:23:07.281 [2024-12-16 10:10:05.650694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.651058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.651091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.655037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.655220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.655243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.659065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.659201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.662937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.663054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.663077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.667011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.667124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.667145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.670934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.671048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.671070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.674904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.675050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.675072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.678868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.679097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.682745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.682972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.281 [2024-12-16 10:10:05.682993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.281 [2024-12-16 10:10:05.686666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.281 [2024-12-16 10:10:05.686883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.686904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.690697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.690829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.690850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.694528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.694646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.694668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.698328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.698497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.698519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.702140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.702276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.702298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.706099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.706235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.706258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.710119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.710362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.710409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.713972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.714242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.714282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.717863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.718025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.718071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.721837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.721980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.722001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.725738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.725857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.725879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.729649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.729770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.729792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.733559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.733694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.733716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.737534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.737670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.737692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.741584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.741840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.741863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.745568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.745851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.745886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.750918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.751063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.751086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.754966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.755091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 
[2024-12-16 10:10:05.755113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.758994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.759132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.759153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.762900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.763036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.763058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.766957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.767100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.767122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.770996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.771145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.771167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.775002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.775231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.775269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.779011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.779306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.779376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.282 [2024-12-16 10:10:05.783020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.282 [2024-12-16 10:10:05.783141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:07.282 [2024-12-16 10:10:05.783162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.786984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.787106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.787127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.790960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.791088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.791109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.794997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.795112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.795135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.798985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.799143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.799165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.802985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.803130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.803150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.807091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.807334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.807355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.811048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.811278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.811299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.814954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.815115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.815142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.818842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.818971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.818992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.822641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.822770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.826365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.826506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.826527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.830167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.830283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.830304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.834190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.834326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.838130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.838399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.842025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.842304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.842362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.845898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.846104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.846125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.849716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.849841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.849862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.853624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.853720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.853742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.857420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.857520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.857541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.861242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.861427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.861464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.865160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.865288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.865310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.869142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.869362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.873030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.873366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.873411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.876836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.876952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.876973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.880799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.880921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.880949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.884732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.884865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.884886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.888796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 10:10:05.888930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.283 [2024-12-16 10:10:05.888951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.283 [2024-12-16 10:10:05.892663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.283 [2024-12-16 
10:10:05.892816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.284 [2024-12-16 10:10:05.892837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.284 [2024-12-16 10:10:05.896509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.284 [2024-12-16 10:10:05.896658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.284 [2024-12-16 10:10:05.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.284 [2024-12-16 10:10:05.900551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.284 [2024-12-16 10:10:05.900757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.284 [2024-12-16 10:10:05.900829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.904511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.904814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.904849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.908481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.908602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.908624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.912482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.912612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.912633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.916453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.916577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.916597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.920352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with 
pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.920521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.920542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.924420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.924572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.924593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.928503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.928656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.928677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.932540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.932810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.936394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.936710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.936750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.940463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.940587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.940608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.944422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.944544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.944566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.948508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.948626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.948647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.952594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.952715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.952735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.956583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.956744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.960639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.960786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.960809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.964643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.964867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.964926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.968641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.968893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.968923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.972615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.972788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.972825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.976636] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.976759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.976780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.980517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.980621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.980642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.984485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.984605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.984625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.989234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.989478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.989501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.544 [2024-12-16 10:10:05.993947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.544 [2024-12-16 10:10:05.994147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.544 [2024-12-16 10:10:05.994171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:05.998502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:05.998726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:05.998761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.002587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.002818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.002871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:07.545 [2024-12-16 10:10:06.006671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.006909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.006930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.010540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.010680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.010701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.014332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.014502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.014523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.018219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.018340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.018361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.022269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.022466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.022487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.026089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.026224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.026246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.030257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.030500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.034251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.034589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.034626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.038224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.038436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.038458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.042114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.042215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.042236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.046274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.046407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.046430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.050564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.050665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.050702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.055065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.055249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.055272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.059448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.059659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.059683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.064125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.064390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.064415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.068532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.068874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.068909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.073021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.073230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.073251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.077349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.077530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.077553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.081608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.081760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.081795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.085901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.086033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.086082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.090211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.090423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.090446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.094243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.094442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.094464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.098198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.098417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.098486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.102010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.102250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.102272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.105900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.106146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.545 [2024-12-16 10:10:06.106169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.545 [2024-12-16 10:10:06.109792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.545 [2024-12-16 10:10:06.109903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.113633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.113764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.113800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.117322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.117483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 
[2024-12-16 10:10:06.117506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.121531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.121715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.121751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.125718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.125933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.125955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.130341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.130641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.130666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.134748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.134962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.134984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.139077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.139281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.139302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.143478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.143613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.143636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.147846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.147986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.148007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.152064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.152178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.152200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.156247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.156424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.156461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.160523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.160750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.160771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.546 [2024-12-16 10:10:06.164681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.546 [2024-12-16 10:10:06.164939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.546 [2024-12-16 10:10:06.164960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.168887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.169114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.169136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.172975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.173173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.173194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.176960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.177078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.177100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.181046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.181171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.181192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.184926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.185059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.185080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.189073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.189244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.189265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.193285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.193461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.193483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.197404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.197634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.197678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.201301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.201525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.201546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.205228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.205435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.205456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.209302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.209513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.209536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.213567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.213697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.213720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.217581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.217718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.217740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.221799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.221997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.225889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.226114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.226144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.230215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 [2024-12-16 10:10:06.230497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.806 [2024-12-16 10:10:06.230536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.806 [2024-12-16 10:10:06.234509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.806 
[2024-12-16 10:10:06.234745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.234768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.238575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.238779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.238801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.242705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.242846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.242868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.246648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.246760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.246782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.250683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.250814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.250835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.254816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.254995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.258908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.259080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.259101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.263051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) 
with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.263289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.263311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.267124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.267332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.271112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.271311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.271332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.275108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.275232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.275253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.279277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.279440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.279463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.283324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.283452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.283474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.287429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.287580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.287601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.291344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.291509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.295468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.295711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.295733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.299667] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.299911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.299950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.303695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.303869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.303890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.307715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.307869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.311740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.311911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.311933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.315820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.315937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.315958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.319791] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.319951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.319973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.323891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.324053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.324075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.328069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.328325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.328347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.332130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.332366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.332387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.336246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.336444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.336465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.340365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.807 [2024-12-16 10:10:06.340496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.807 [2024-12-16 10:10:06.340518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.807 [2024-12-16 10:10:06.344627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.344767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.344790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:07.808 [2024-12-16 10:10:06.348595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.348725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.348747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.352603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.352794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.356622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.356793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.356814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.360791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.361011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.361032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.365193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.365423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.365459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.369357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.369574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.369597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.373555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.373676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.373699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.377607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.377708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.377730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.381519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.381620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.381642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.385571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.385742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.385764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.389547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.389691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.389713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.393738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.393986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.397720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.397911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.397931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.401622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.401785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.401808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.405634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.405761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.409559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.409691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.409713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.413415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.413535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.413557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.417540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.417703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.417741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.421448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.421583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.421604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:07.808 [2024-12-16 10:10:06.425508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:07.808 [2024-12-16 10:10:06.425751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.808 [2024-12-16 10:10:06.425773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.068 [2024-12-16 10:10:06.429417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.068 [2024-12-16 10:10:06.429633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-12-16 10:10:06.429670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.068 [2024-12-16 10:10:06.433475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.068 [2024-12-16 10:10:06.433654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-12-16 10:10:06.433676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.068 [2024-12-16 10:10:06.437326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.068 [2024-12-16 10:10:06.437475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-12-16 10:10:06.437496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.441434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.441539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.441561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.445505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.445603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.445623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.449604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.449773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.449794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.453662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.453865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.453887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.457894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.458148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 
[2024-12-16 10:10:06.458171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.462094] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.462284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.466034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.466237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.466259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.470140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.470255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.470276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.474301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.474437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.474474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.478251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.478353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.478375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.482330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.482512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.482534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.486503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.486677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.486699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.490810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.491048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.491069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.494959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.495238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.495294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.499077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.499298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.499320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.503193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.503319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.503341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.507484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.507602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.507624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.511445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.511564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.511585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.515692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.515862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.515883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.519818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.519980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.520002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.524123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.524350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.524371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.528306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.528533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.528555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.532528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.532713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.532750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.536772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.536948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.536970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.540887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.541022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.541044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.545146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.545298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.545320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.549494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.069 [2024-12-16 10:10:06.549654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-12-16 10:10:06.549676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.069 [2024-12-16 10:10:06.553638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.553800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.553837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.557983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.558254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.558278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.562351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.562600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.562639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.566597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.566787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.566810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.570747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.570866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.570888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.574958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.575078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.575100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.579048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.579165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.579186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.583288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.583513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.583536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.587527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.587692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.587714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.591772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.592019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.592065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.596000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.596220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.596242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.600176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.600375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.600411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.604411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 
00:23:08.070 [2024-12-16 10:10:06.604539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.604562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.608542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.608644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.608666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.612750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.612868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.612891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.616926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.617102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.620998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.621159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.621181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.625268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.625504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.625526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.629310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.629545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.629568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.633340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.633513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.633536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.637218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.637357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.637379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.641112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.641236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.641257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.645128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.645241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.645262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.649173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.649338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.649359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.653120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.653282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.653303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.657187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.657415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.657443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.661177] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.661399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.665178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.070 [2024-12-16 10:10:06.665375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.070 [2024-12-16 10:10:06.665409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.070 [2024-12-16 10:10:06.669087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.669227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.669248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.071 [2024-12-16 10:10:06.673018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.673151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.673172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.071 [2024-12-16 10:10:06.677066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.677201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.677222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.071 [2024-12-16 10:10:06.681032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.681200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.681222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.071 [2024-12-16 10:10:06.685004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.685159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.685180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:08.071 [2024-12-16 10:10:06.689037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.071 [2024-12-16 10:10:06.689253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.071 [2024-12-16 10:10:06.689274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.335 [2024-12-16 10:10:06.692972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.335 [2024-12-16 10:10:06.693178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.335 [2024-12-16 10:10:06.693198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.335 [2024-12-16 10:10:06.696966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.335 [2024-12-16 10:10:06.697159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.697187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.700904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.701036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.701057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.704765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.704885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.704907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.708629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.708762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.708783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.712548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.712715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.712736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.716485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.716660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.716680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.720594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.720835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.720856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.724459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.724785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.724822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.728267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.728390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.728425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.732264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.732421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.732441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.736201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.736325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.736345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.740084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.740234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.740255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.744230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.744399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.744421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.748146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.748288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.748309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.752087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.752305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.752326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.756066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.756282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.756304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.760030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.760230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.760252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.763961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.764078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.764099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.767858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.767970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.767991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.771672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.771777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.771799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.775748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.775934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.336 [2024-12-16 10:10:06.775957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.336 [2024-12-16 10:10:06.779552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.336 [2024-12-16 10:10:06.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.779713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.783444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.783682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.783733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.787456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.787666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.787703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.791155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.791348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.791369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.795052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.795168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 
[2024-12-16 10:10:06.795189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.798923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.799040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.799061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.802858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.802969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.802990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.806796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.806959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.806980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.810652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.810824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.810845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.814647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.814866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.814887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.818516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.818855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.822300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.822425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.822445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.826327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.826461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.826483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.830239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.830341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.830362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.834183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.834298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.834320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.838032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.838235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.838257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.841894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.842079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.842101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.845951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.846205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.337 [2024-12-16 10:10:06.846228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.337 [2024-12-16 10:10:06.849850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.337 [2024-12-16 10:10:06.850080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.850102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.853792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.853989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.854009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.857693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.857831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.857852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.861583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.861708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.861729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.865467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.865582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.865610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.869341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.869522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.869543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.873156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.873318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.873338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.877167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.877384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.877419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.880989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.881195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.881216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.885011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.885197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.885217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.888907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.889022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.892788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.892901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.892921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.896803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.896916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.896937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.900722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.900872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.900893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.904630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.904801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.904822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.908577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.908795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.908816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.912385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.912596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.912617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.916222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.916426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.916447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.920109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.338 [2024-12-16 10:10:06.920226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.338 [2024-12-16 10:10:06.920247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.338 [2024-12-16 10:10:06.923987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.924106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.924128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.927895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.928034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.928055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.931829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 
00:23:08.339 [2024-12-16 10:10:06.931997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.932018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.935767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.935997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.936019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.939710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.939932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.939953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.943662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.943899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.947613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.947811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.947832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.339 [2024-12-16 10:10:06.951407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.339 [2024-12-16 10:10:06.951582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.339 [2024-12-16 10:10:06.951603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.955323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.955464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.955485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.959230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.959367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.959402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.963171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.963341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.963362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.967109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.967259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.967280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.971003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.971221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.971246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.974948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.975172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.975192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.978880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.979061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.979082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.982736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.982874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.982895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.986675] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.986791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.986813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.990587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.990703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.990723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.994548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.994717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.994738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:06.998323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:06.998489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:06.998510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:07.002392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.002619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:07.002641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:07.006344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.006611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:07.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:07.010322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.010525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:07.010547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:08.602 [2024-12-16 10:10:07.014396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.014521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:07.014542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:07.018338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.018491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.602 [2024-12-16 10:10:07.018512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.602 [2024-12-16 10:10:07.022266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.602 [2024-12-16 10:10:07.022351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.022388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.026032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.026223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.026244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.029924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.030115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.030137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.034025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.034264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.034287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.037848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.038107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.041860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.042058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.042080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.045952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.046109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.046131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.049911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.050042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.050073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.053800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.053911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.053931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.057774] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.057942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.057964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.061644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.061782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.061803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.065701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.065965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.065988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.069879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.070102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.070125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.074096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.074294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.074318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.078397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.078549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.082850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.082966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.082988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.087316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.087473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.087497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.091736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.091970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.091991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.096104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.096263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.096285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.100571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.100846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.100900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.104876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.105095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.105116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.108914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.109099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.109120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.112988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.113100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.113121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.116970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.117090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.117111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.120874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.120986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 [2024-12-16 10:10:07.121007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.603 [2024-12-16 10:10:07.124885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.603 [2024-12-16 10:10:07.125064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.603 
[2024-12-16 10:10:07.125085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.128855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.129017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.129038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.132841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.133059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.133080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.136892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.137230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.140948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.141094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.141116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.144973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.145102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.145123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.148934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.149055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.149076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.152838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.152967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.152987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.156808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.156977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.156997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.160800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.160956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.160978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.164787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.165006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.165027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.168714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.168993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.169047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.172997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.173209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.173229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.176903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.177046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.177067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.180948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.181072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.181092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.185002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.185124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.185145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.189096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.189265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.189287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.192975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.193142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.193163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.197004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.197222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.197243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.200980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.201190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.201211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.205050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.205257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.205278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.208999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.209111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.209131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.212971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.213095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.213116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.216895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.217016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.217037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.604 [2024-12-16 10:10:07.220885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.604 [2024-12-16 10:10:07.221061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.604 [2024-12-16 10:10:07.221083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.224968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.225131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.225168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.229188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.229451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.229491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.233308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.233664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.233701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.237364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 
10:10:07.237562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.237583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.241345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.241511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.241532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.245540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.245681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.864 [2024-12-16 10:10:07.245703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.864 [2024-12-16 10:10:07.249573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.864 [2024-12-16 10:10:07.249679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.249703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.253660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.253853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.253891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.257775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.257951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.257972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.261800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.262037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.262086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.265719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with 
pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.266036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.269728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.269843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.269865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.273745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.273901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.273924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.277682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.277805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.277827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.281560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.281693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.281714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.285563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.285738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.285774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.289525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.289681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.289703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.293589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.293852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.297439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.297672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.297694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.301216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.301422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.301444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.305090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.305234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.305255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.309055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.309180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.309201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.312989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.313104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.313125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.316918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.317102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.317123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.320844] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.320989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.321009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.324858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.325077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.325098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.328838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.329037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.329057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.332768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.332966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.332988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.336597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.336734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.340607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.340741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.340762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.344705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.344840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.344861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:08.865 [2024-12-16 10:10:07.348842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.349014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.349036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.352933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.353083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.357372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.357626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.865 [2024-12-16 10:10:07.357665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.865 [2024-12-16 10:10:07.361596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.865 [2024-12-16 10:10:07.361836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.361857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.365796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.366000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.366022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.370005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.370150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.370172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.374530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.374667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.374708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.379334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.379474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.379510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.383961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.384132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.384170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.388416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.388607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.388645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.392993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.393215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.393237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.397380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.397648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.397690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.401613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.401849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.401871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.405835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.405995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.406017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.409938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.410064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.410085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.414219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.414326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.414349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.418514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.418706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.418728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.422636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.422809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.422830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.426810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.427038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.427059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.430970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.431233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.435424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.435651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.435674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.439551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.439711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.439748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.443716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.443842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.443864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.447797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.447926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.447949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.452078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.452287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.452308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.456722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.456920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.456957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.461018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.461261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.461303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.465112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.465344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 
10:10:07.465366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.469287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.469503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.469525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.473349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.866 [2024-12-16 10:10:07.473483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.866 [2024-12-16 10:10:07.473520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.866 [2024-12-16 10:10:07.477551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.867 [2024-12-16 10:10:07.477697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.867 [2024-12-16 10:10:07.477719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.867 [2024-12-16 10:10:07.481676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.867 [2024-12-16 10:10:07.481781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.867 [2024-12-16 10:10:07.481803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.867 [2024-12-16 10:10:07.485673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:08.867 [2024-12-16 10:10:07.485855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.867 [2024-12-16 10:10:07.485878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.489811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.489977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.489999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.493961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.494198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:09.126 [2024-12-16 10:10:07.494220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.498194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.498516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.498574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.502450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.502578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.502600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.506652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.506795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.506817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.510764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.510922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.510944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.515093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.515233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.515254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.519408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.519653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.519677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.523587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.523733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.523755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.527758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.527985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.528020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.126 [2024-12-16 10:10:07.531970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.126 [2024-12-16 10:10:07.532207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.126 [2024-12-16 10:10:07.532228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.536189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.536386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.536407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.540553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.540668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.540691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.544644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.544768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.544791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.548670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.548788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.548809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.552844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.553010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.553031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.556948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.557132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.557154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.561571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.561826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.561861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.565682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.565955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.565990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.569687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.569843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.569864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.573707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.573827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.573848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.577764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.577879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.577900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.581948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.582086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.582110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.586145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.586304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.586327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.590293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.590447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.590469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.594513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.594807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.594852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.598588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.598855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.598888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.602530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.602708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.602745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.606962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.607087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.607109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.611271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.611415] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.611437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.615420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.615552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.615573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.619505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.619677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.619698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.623487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.623652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.623673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.627428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.627668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.627690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.631481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.631698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.631720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.635443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.635629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.635650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.639547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.639673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.639694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.127 [2024-12-16 10:10:07.643713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1a280) with pdu=0x2000190fef90 00:23:09.127 [2024-12-16 10:10:07.643842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.127 [2024-12-16 10:10:07.643862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.127 00:23:09.127 Latency(us) 00:23:09.127 [2024-12-16T10:10:07.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.128 [2024-12-16T10:10:07.753Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:09.128 nvme0n1 : 2.00 7630.53 953.82 0.00 0.00 2091.91 1608.61 5213.09 00:23:09.128 [2024-12-16T10:10:07.753Z] =================================================================================================================== 00:23:09.128 [2024-12-16T10:10:07.753Z] Total : 7630.53 953.82 0.00 0.00 2091.91 1608.61 5213.09 00:23:09.128 0 00:23:09.128 10:10:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:09.128 10:10:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:09.128 10:10:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:09.128 10:10:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:09.128 | .driver_specific 00:23:09.128 | .nvme_error 00:23:09.128 | .status_code 00:23:09.128 | .command_transient_transport_error' 00:23:09.386 10:10:07 -- host/digest.sh@71 -- # (( 492 > 0 )) 00:23:09.386 10:10:07 -- host/digest.sh@73 -- # killprocess 97957 00:23:09.386 10:10:07 -- common/autotest_common.sh@936 -- # '[' -z 97957 ']' 00:23:09.386 10:10:07 -- common/autotest_common.sh@940 -- # kill -0 97957 00:23:09.386 10:10:07 -- common/autotest_common.sh@941 -- # uname 00:23:09.386 10:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.386 10:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97957 00:23:09.386 10:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:09.386 killing process with pid 97957 00:23:09.386 10:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:09.386 10:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97957' 00:23:09.386 Received shutdown signal, test time was about 2.000000 seconds 00:23:09.386 00:23:09.386 Latency(us) 00:23:09.386 [2024-12-16T10:10:08.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.386 [2024-12-16T10:10:08.011Z] =================================================================================================================== 00:23:09.386 [2024-12-16T10:10:08.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.386 10:10:07 -- common/autotest_common.sh@955 -- # kill 97957 00:23:09.386 10:10:07 -- common/autotest_common.sh@960 -- # wait 97957 00:23:09.644 10:10:08 -- host/digest.sh@115 -- # killprocess 97648 00:23:09.644 10:10:08 -- common/autotest_common.sh@936 -- # '[' -z 97648 ']' 00:23:09.644 10:10:08 -- 
common/autotest_common.sh@940 -- # kill -0 97648 00:23:09.644 10:10:08 -- common/autotest_common.sh@941 -- # uname 00:23:09.644 10:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.644 10:10:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97648 00:23:09.644 10:10:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.644 10:10:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.644 killing process with pid 97648 00:23:09.644 10:10:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97648' 00:23:09.645 10:10:08 -- common/autotest_common.sh@955 -- # kill 97648 00:23:09.645 10:10:08 -- common/autotest_common.sh@960 -- # wait 97648 00:23:09.903 00:23:09.903 real 0m18.357s 00:23:09.903 user 0m34.819s 00:23:09.903 sys 0m4.828s 00:23:09.903 10:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:09.903 10:10:08 -- common/autotest_common.sh@10 -- # set +x 00:23:09.903 ************************************ 00:23:09.903 END TEST nvmf_digest_error 00:23:09.903 ************************************ 00:23:09.903 10:10:08 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:09.903 10:10:08 -- host/digest.sh@139 -- # nvmftestfini 00:23:09.903 10:10:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:09.903 10:10:08 -- nvmf/common.sh@116 -- # sync 00:23:10.162 10:10:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:10.162 10:10:08 -- nvmf/common.sh@119 -- # set +e 00:23:10.162 10:10:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:10.162 10:10:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:10.162 rmmod nvme_tcp 00:23:10.162 rmmod nvme_fabrics 00:23:10.162 rmmod nvme_keyring 00:23:10.162 10:10:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:10.162 10:10:08 -- nvmf/common.sh@123 -- # set -e 00:23:10.162 10:10:08 -- nvmf/common.sh@124 -- # return 0 00:23:10.162 10:10:08 -- nvmf/common.sh@477 -- # '[' -n 97648 ']' 00:23:10.162 10:10:08 -- nvmf/common.sh@478 -- # killprocess 97648 00:23:10.162 10:10:08 -- common/autotest_common.sh@936 -- # '[' -z 97648 ']' 00:23:10.162 10:10:08 -- common/autotest_common.sh@940 -- # kill -0 97648 00:23:10.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97648) - No such process 00:23:10.162 Process with pid 97648 is not found 00:23:10.162 10:10:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97648 is not found' 00:23:10.162 10:10:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:10.162 10:10:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:10.162 10:10:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:10.162 10:10:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.162 10:10:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:10.162 10:10:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.162 10:10:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.162 10:10:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.162 10:10:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:10.162 00:23:10.162 real 0m35.726s 00:23:10.162 user 1m6.616s 00:23:10.162 sys 0m9.745s 00:23:10.162 10:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:10.162 10:10:08 -- common/autotest_common.sh@10 -- # set +x 00:23:10.162 ************************************ 00:23:10.162 END TEST nvmf_digest 00:23:10.162 ************************************ 00:23:10.162 
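The pass criterion applied just above is worth spelling out: every deliberately corrupted data digest must come back as a COMMAND TRANSIENT TRANSPORT ERROR completion, and host/digest.sh counts those completions through bperf's RPC socket. A minimal sketch of that check, using the rpc.py path, socket and jq filter shown in the log (the surrounding helper functions are simplified away):

    # Ask bperf for per-bdev NVMe error statistics and extract the transient transport error counter.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # This run reported 492 such completions; the test only requires the count to be non-zero.
    (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"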
10:10:08 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:10.162 10:10:08 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:10.162 10:10:08 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:10.162 10:10:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:10.162 10:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.162 10:10:08 -- common/autotest_common.sh@10 -- # set +x 00:23:10.162 ************************************ 00:23:10.162 START TEST nvmf_mdns_discovery 00:23:10.162 ************************************ 00:23:10.162 10:10:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:10.162 * Looking for test storage... 00:23:10.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:10.162 10:10:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:10.162 10:10:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:10.162 10:10:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:10.421 10:10:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:10.421 10:10:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:10.421 10:10:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:10.421 10:10:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:10.421 10:10:08 -- scripts/common.sh@335 -- # IFS=.-: 00:23:10.421 10:10:08 -- scripts/common.sh@335 -- # read -ra ver1 00:23:10.421 10:10:08 -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.421 10:10:08 -- scripts/common.sh@336 -- # read -ra ver2 00:23:10.421 10:10:08 -- scripts/common.sh@337 -- # local 'op=<' 00:23:10.421 10:10:08 -- scripts/common.sh@339 -- # ver1_l=2 00:23:10.421 10:10:08 -- scripts/common.sh@340 -- # ver2_l=1 00:23:10.421 10:10:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:10.421 10:10:08 -- scripts/common.sh@343 -- # case "$op" in 00:23:10.421 10:10:08 -- scripts/common.sh@344 -- # : 1 00:23:10.421 10:10:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:10.421 10:10:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:10.421 10:10:08 -- scripts/common.sh@364 -- # decimal 1 00:23:10.421 10:10:08 -- scripts/common.sh@352 -- # local d=1 00:23:10.421 10:10:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.421 10:10:08 -- scripts/common.sh@354 -- # echo 1 00:23:10.421 10:10:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:10.421 10:10:08 -- scripts/common.sh@365 -- # decimal 2 00:23:10.421 10:10:08 -- scripts/common.sh@352 -- # local d=2 00:23:10.421 10:10:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.421 10:10:08 -- scripts/common.sh@354 -- # echo 2 00:23:10.421 10:10:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:10.421 10:10:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:10.421 10:10:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:10.421 10:10:08 -- scripts/common.sh@367 -- # return 0 00:23:10.421 10:10:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.421 10:10:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.421 --rc genhtml_branch_coverage=1 00:23:10.421 --rc genhtml_function_coverage=1 00:23:10.421 --rc genhtml_legend=1 00:23:10.421 --rc geninfo_all_blocks=1 00:23:10.421 --rc geninfo_unexecuted_blocks=1 00:23:10.421 00:23:10.421 ' 00:23:10.421 10:10:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.421 --rc genhtml_branch_coverage=1 00:23:10.421 --rc genhtml_function_coverage=1 00:23:10.421 --rc genhtml_legend=1 00:23:10.421 --rc geninfo_all_blocks=1 00:23:10.421 --rc geninfo_unexecuted_blocks=1 00:23:10.421 00:23:10.421 ' 00:23:10.421 10:10:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:10.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.421 --rc genhtml_branch_coverage=1 00:23:10.421 --rc genhtml_function_coverage=1 00:23:10.421 --rc genhtml_legend=1 00:23:10.421 --rc geninfo_all_blocks=1 00:23:10.421 --rc geninfo_unexecuted_blocks=1 00:23:10.421 00:23:10.421 ' 00:23:10.421 10:10:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:10.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.422 --rc genhtml_branch_coverage=1 00:23:10.422 --rc genhtml_function_coverage=1 00:23:10.422 --rc genhtml_legend=1 00:23:10.422 --rc geninfo_all_blocks=1 00:23:10.422 --rc geninfo_unexecuted_blocks=1 00:23:10.422 00:23:10.422 ' 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.422 10:10:08 -- nvmf/common.sh@7 -- # uname -s 00:23:10.422 10:10:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.422 10:10:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.422 10:10:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.422 10:10:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.422 10:10:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.422 10:10:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.422 10:10:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.422 10:10:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.422 10:10:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.422 10:10:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
00:23:10.422 10:10:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:23:10.422 10:10:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.422 10:10:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.422 10:10:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:10.422 10:10:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.422 10:10:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.422 10:10:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.422 10:10:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.422 10:10:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.422 10:10:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.422 10:10:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.422 10:10:08 -- paths/export.sh@5 -- # export PATH 00:23:10.422 10:10:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.422 10:10:08 -- nvmf/common.sh@46 -- # : 0 00:23:10.422 10:10:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:10.422 10:10:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:10.422 10:10:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:10.422 10:10:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.422 10:10:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.422 10:10:08 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:10.422 10:10:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:10.422 10:10:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:10.422 10:10:08 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:10.422 10:10:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:10.422 10:10:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.422 10:10:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:10.422 10:10:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:10.422 10:10:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:10.422 10:10:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.422 10:10:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.422 10:10:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.422 10:10:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:10.422 10:10:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:10.422 10:10:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.422 10:10:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.422 10:10:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:10.422 10:10:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:10.422 10:10:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:10.422 10:10:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:10.422 10:10:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:10.422 10:10:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.422 10:10:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:10.422 10:10:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:10.422 10:10:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:10.422 10:10:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:10.422 10:10:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:10.422 10:10:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:10.422 Cannot find device "nvmf_tgt_br" 00:23:10.422 10:10:08 -- nvmf/common.sh@154 -- # true 00:23:10.422 10:10:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.422 Cannot find device "nvmf_tgt_br2" 00:23:10.422 10:10:08 -- nvmf/common.sh@155 -- # true 00:23:10.422 10:10:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:10.422 10:10:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:10.422 Cannot find device "nvmf_tgt_br" 00:23:10.422 10:10:08 -- nvmf/common.sh@157 -- # true 00:23:10.422 
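For readability, the knobs the mdns_discovery test has just exported are collected here in one place; the values are copied from the log above, not general defaults of nvmf/common.sh:

    DISCOVERY_FILTER=address                           # later passed to nvmf_set_config --discovery-filter
    DISCOVERY_PORT=8009                                # discovery subsystem listener port
    DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
    NQN=nqn.2016-06.io.spdk:cnode                      # prefix for the data subsystems created below
    NQN2=nqn.2016-06.io.spdk:cnode2
    HOST_NQN=nqn.2021-12.io.spdk:test                  # host NQN the subsystems will allow
    HOST_SOCK=/tmp/host.sock                           # RPC socket of the second, host-side SPDK app
    NVMF_INITIATOR_IP=10.0.0.1                         # stays in the root namespace
    NVMF_FIRST_TARGET_IP=10.0.0.2                      # assigned inside nvmf_tgt_ns_spdk
    NVMF_SECOND_TARGET_IP=10.0.0.3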
10:10:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:10.422 Cannot find device "nvmf_tgt_br2" 00:23:10.422 10:10:08 -- nvmf/common.sh@158 -- # true 00:23:10.422 10:10:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:10.422 10:10:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:10.422 10:10:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.422 10:10:08 -- nvmf/common.sh@161 -- # true 00:23:10.422 10:10:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.422 10:10:08 -- nvmf/common.sh@162 -- # true 00:23:10.422 10:10:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.422 10:10:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.422 10:10:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.422 10:10:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.422 10:10:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.422 10:10:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.681 10:10:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.681 10:10:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:10.681 10:10:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:10.681 10:10:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:10.681 10:10:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:10.681 10:10:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:10.681 10:10:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:10.681 10:10:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:10.681 10:10:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:10.681 10:10:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:10.681 10:10:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:10.681 10:10:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:10.681 10:10:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:10.681 10:10:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:10.681 10:10:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:10.681 10:10:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:10.681 10:10:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.681 10:10:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:10.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:23:10.681 00:23:10.681 --- 10.0.0.2 ping statistics --- 00:23:10.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.681 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:10.681 10:10:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:10.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
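The run of ip commands around this point builds a small virtual test network; condensed into a sketch taken from the commands in the log, including the addressing and bridging steps that follow immediately below:

    ip netns add nvmf_tgt_ns_spdk                               # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator endpoint, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # two target endpoints...
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # ...moved into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # (the log also brings each interface up with "ip link set ... up" before assembling the bridge)
    ip link add nvmf_br type bridge                             # the *_br peers are enslaved to one bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow confirm that 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and that 10.0.0.1 is reachable from inside nvmf_tgt_ns_spdk.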
00:23:10.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:23:10.681 00:23:10.681 --- 10.0.0.3 ping statistics --- 00:23:10.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.681 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:23:10.681 10:10:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:10.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:10.681 00:23:10.681 --- 10.0.0.1 ping statistics --- 00:23:10.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.682 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:10.682 10:10:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.682 10:10:09 -- nvmf/common.sh@421 -- # return 0 00:23:10.682 10:10:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:10.682 10:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.682 10:10:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:10.682 10:10:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:10.682 10:10:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.682 10:10:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:10.682 10:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:10.682 10:10:09 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:10.682 10:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:10.682 10:10:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.682 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:10.682 10:10:09 -- nvmf/common.sh@469 -- # nvmfpid=98267 00:23:10.682 10:10:09 -- nvmf/common.sh@470 -- # waitforlisten 98267 00:23:10.682 10:10:09 -- common/autotest_common.sh@829 -- # '[' -z 98267 ']' 00:23:10.682 10:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.682 10:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.682 10:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.682 10:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.682 10:10:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:10.682 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:10.682 [2024-12-16 10:10:09.263563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:10.682 [2024-12-16 10:10:09.263672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.940 [2024-12-16 10:10:09.405415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.940 [2024-12-16 10:10:09.465349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:10.940 [2024-12-16 10:10:09.465549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.940 [2024-12-16 10:10:09.465561] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:10.940 [2024-12-16 10:10:09.465570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.940 [2024-12-16 10:10:09.465601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.940 10:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.940 10:10:09 -- common/autotest_common.sh@862 -- # return 0 00:23:10.940 10:10:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:10.940 10:10:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.940 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:10.940 10:10:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.940 10:10:09 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:10.940 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.940 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.199 10:10:09 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:11.199 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.199 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.199 10:10:09 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.199 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.199 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 [2024-12-16 10:10:09.665526] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.199 10:10:09 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:11.199 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.199 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 [2024-12-16 10:10:09.677645] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.199 10:10:09 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:11.199 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.199 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 null0 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.199 10:10:09 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:11.199 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.199 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.199 null1 00:23:11.199 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:11.200 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.200 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.200 null2 00:23:11.200 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:11.200 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.200 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.200 null3 00:23:11.200 10:10:09 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:11.200 10:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.200 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.200 10:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@47 -- # hostpid=98298 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:11.200 10:10:09 -- host/mdns_discovery.sh@48 -- # waitforlisten 98298 /tmp/host.sock 00:23:11.200 10:10:09 -- common/autotest_common.sh@829 -- # '[' -z 98298 ']' 00:23:11.200 10:10:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:11.200 10:10:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.200 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:11.200 10:10:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:11.200 10:10:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.200 10:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:11.200 [2024-12-16 10:10:09.797534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:11.200 [2024-12-16 10:10:09.797633] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98298 ] 00:23:11.458 [2024-12-16 10:10:09.935873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.458 [2024-12-16 10:10:10.014310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:11.458 [2024-12-16 10:10:10.014527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.394 10:10:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.395 10:10:10 -- common/autotest_common.sh@862 -- # return 0 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@57 -- # avahipid=98328 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:12.395 10:10:10 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:12.395 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:12.395 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:12.395 Successfully dropped root privileges. 00:23:12.395 avahi-daemon 0.8 starting up. 00:23:12.395 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:12.395 Successfully called chroot(). 00:23:12.395 Successfully dropped remaining capabilities. 00:23:12.395 No service file found in /etc/avahi/services. 00:23:13.330 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
00:23:13.330 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:13.330 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:13.330 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:13.330 Network interface enumeration completed. 00:23:13.330 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:13.330 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:13.331 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:13.331 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:13.331 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1513880561. 00:23:13.331 10:10:11 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:13.331 10:10:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.331 10:10:11 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:13.590 10:10:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:11 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:13.590 10:10:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@68 -- # xargs 00:23:13.590 10:10:11 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:11 -- host/mdns_discovery.sh@68 -- # sort 00:23:13.590 10:10:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.590 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # xargs 00:23:13.590 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # sort 00:23:13.590 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.590 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:13.590 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:12 -- common/autotest_common.sh@10 -- # set +x 
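The empty-string checks that follow are jq/sort/xargs pipelines over the host's RPC socket, run right after mDNS discovery is started. A minimal sketch of that sequence, using only the calls visible in the trace (the helper names mirror those in host/mdns_discovery.sh):

  # Host-side discovery bring-up, as traced above.
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

  # The verification helpers reduce the RPC output to sorted name lists;
  # both are expected to be empty until the CDC service is published.
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  [[ $(get_subsystem_names) == '' ]]
  [[ $(get_bdev_list) == '' ]]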
00:23:13.590 10:10:12 -- host/mdns_discovery.sh@68 -- # sort 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@68 -- # xargs 00:23:13.590 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:13.590 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # sort 00:23:13.590 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@64 -- # xargs 00:23:13.590 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:13.590 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.590 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.590 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.590 10:10:12 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@68 -- # sort 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@68 -- # xargs 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 [2024-12-16 10:10:12.268117] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@64 -- # sort 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@64 -- # xargs 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 [2024-12-16 10:10:12.330576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 
10:10:12 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 [2024-12-16 10:10:12.382527] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:13.849 10:10:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.849 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:23:13.849 [2024-12-16 10:10:12.394571] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:13.849 10:10:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98389 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:13.849 10:10:12 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:14.801 [2024-12-16 10:10:13.168123] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:14.801 Established under name 'CDC' 00:23:15.084 [2024-12-16 10:10:13.568133] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:15.085 [2024-12-16 10:10:13.568161] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:15.085 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:15.085 cookie is 0 00:23:15.085 is_local: 1 00:23:15.085 our_own: 0 00:23:15.085 wide_area: 0 00:23:15.085 multicast: 1 00:23:15.085 cached: 1 00:23:15.085 [2024-12-16 10:10:13.668126] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:15.085 [2024-12-16 10:10:13.668150] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:15.085 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:15.085 cookie is 0 00:23:15.085 is_local: 1 00:23:15.085 our_own: 0 00:23:15.085 wide_area: 0 00:23:15.085 multicast: 1 00:23:15.085 
cached: 1 00:23:16.019 [2024-12-16 10:10:14.579930] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:16.019 [2024-12-16 10:10:14.579957] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:16.019 [2024-12-16 10:10:14.579975] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:16.277 [2024-12-16 10:10:14.666036] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:16.277 [2024-12-16 10:10:14.679610] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:16.278 [2024-12-16 10:10:14.679628] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:16.278 [2024-12-16 10:10:14.679645] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.278 [2024-12-16 10:10:14.731226] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:16.278 [2024-12-16 10:10:14.731470] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:16.278 [2024-12-16 10:10:14.765673] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:16.278 [2024-12-16 10:10:14.820728] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:16.278 [2024-12-16 10:10:14.820814] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.809 10:10:17 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:18.809 10:10:17 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:18.809 10:10:17 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:18.809 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.809 10:10:17 -- host/mdns_discovery.sh@80 -- # sort 00:23:18.809 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:18.809 10:10:17 -- host/mdns_discovery.sh@80 -- # xargs 00:23:18.809 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@76 -- # sort 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:19.068 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.068 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@76 -- # xargs 00:23:19.068 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.068 10:10:17 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:19.068 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@68 -- # sort 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@68 -- # xargs 00:23:19.068 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.068 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.068 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@64 -- # sort 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@64 -- # xargs 00:23:19.068 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:19.068 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.068 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # xargs 00:23:19.068 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.068 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.068 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.068 10:10:17 -- host/mdns_discovery.sh@72 -- # xargs 00:23:19.325 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.325 10:10:17 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:19.325 10:10:17 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:19.325 10:10:17 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:19.325 10:10:17 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:19.326 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.326 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.326 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:19.326 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.326 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.326 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:19.326 10:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.326 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:23:19.326 10:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.326 10:10:17 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.259 10:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.259 10:10:18 -- common/autotest_common.sh@10 -- # set +x 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.259 10:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:20.259 10:10:18 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.259 10:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.259 10:10:18 -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 10:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:20.517 10:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 10:10:18 -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 [2024-12-16 10:10:18.930133] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.517 [2024-12-16 10:10:18.931087] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.517 [2024-12-16 10:10:18.931313] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.517 [2024-12-16 10:10:18.931415] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:20.517 [2024-12-16 10:10:18.931433] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:20.517 10:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:20.517 10:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.517 10:10:18 -- common/autotest_common.sh@10 -- # set +x 00:23:20.517 [2024-12-16 10:10:18.938018] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:20.517 [2024-12-16 10:10:18.939080] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.517 [2024-12-16 10:10:18.939160] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:20.517 10:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.517 10:10:18 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:20.517 [2024-12-16 10:10:19.070200] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:20.517 [2024-12-16 10:10:19.070451] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:20.517 [2024-12-16 10:10:19.133509] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:20.517 [2024-12-16 10:10:19.133529] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:20.517 [2024-12-16 10:10:19.133535] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.517 [2024-12-16 10:10:19.133551] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.517 [2024-12-16 10:10:19.133641] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:23:20.517 [2024-12-16 10:10:19.133650] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:20.517 [2024-12-16 10:10:19.133655] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:20.517 [2024-12-16 10:10:19.133667] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:20.775 [2024-12-16 10:10:19.179297] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:20.775 [2024-12-16 10:10:19.179314] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.775 [2024-12-16 10:10:19.180301] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:20.775 [2024-12-16 10:10:19.180317] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:21.341 10:10:19 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:21.341 10:10:19 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.341 10:10:19 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:21.341 10:10:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.341 10:10:19 -- common/autotest_common.sh@10 -- # set +x 00:23:21.341 10:10:19 -- host/mdns_discovery.sh@68 -- # sort 00:23:21.341 10:10:19 -- host/mdns_discovery.sh@68 -- # xargs 00:23:21.599 10:10:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@64 -- # sort 00:23:21.599 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.599 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@64 -- # xargs 00:23:21.599 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:21.599 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # xargs 00:23:21.599 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.599 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:21.599 10:10:20 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:21.599 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:21.599 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@72 -- # xargs 00:23:21.599 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:21.599 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.599 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.599 10:10:20 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:21.599 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.860 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.860 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.860 [2024-12-16 10:10:20.251827] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:21.860 [2024-12-16 10:10:20.251860] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.860 [2024-12-16 10:10:20.251892] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:21.860 [2024-12-16 10:10:20.251905] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:21.860 [2024-12-16 10:10:20.251967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.251999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.252027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.252036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.252046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.252055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.252064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.252072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.252080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.860 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:21.860 10:10:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.860 10:10:20 -- common/autotest_common.sh@10 -- # set +x 00:23:21.860 [2024-12-16 10:10:20.258790] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:21.860 [2024-12-16 10:10:20.259569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.259602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.259615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.259639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.259648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.259657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.259667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.860 [2024-12-16 10:10:20.259675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.860 [2024-12-16 10:10:20.259683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.860 [2024-12-16 10:10:20.259813] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:21.860 [2024-12-16 10:10:20.261921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.860 10:10:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.860 10:10:20 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:21.860 [2024-12-16 10:10:20.269521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.860 [2024-12-16 10:10:20.271939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.860 [2024-12-16 10:10:20.272068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.272112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.272126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.860 [2024-12-16 10:10:20.272136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be 
set 00:23:21.860 [2024-12-16 10:10:20.272151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.860 [2024-12-16 10:10:20.272164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.860 [2024-12-16 10:10:20.272172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.860 [2024-12-16 10:10:20.272182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.860 [2024-12-16 10:10:20.272196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.860 [2024-12-16 10:10:20.279532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.860 [2024-12-16 10:10:20.279628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.279670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.279685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.860 [2024-12-16 10:10:20.279694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.860 [2024-12-16 10:10:20.279708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.860 [2024-12-16 10:10:20.279721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.860 [2024-12-16 10:10:20.279729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.860 [2024-12-16 10:10:20.279738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.860 [2024-12-16 10:10:20.279751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.860 [2024-12-16 10:10:20.282009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.860 [2024-12-16 10:10:20.282121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.282162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.282176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.860 [2024-12-16 10:10:20.282185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.860 [2024-12-16 10:10:20.282199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.860 [2024-12-16 10:10:20.282212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.860 [2024-12-16 10:10:20.282220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.860 [2024-12-16 10:10:20.282228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
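The reset/reconnect errors that begin here follow directly from the steps traced just above: the CDC was advertised over mDNS, the host attached mdns0_nvme0 and mdns1_nvme0 on port 4420 and later picked up 4421 as a second path, and then the 4420 listeners were removed. A compressed sketch of those operations, restricted to commands that appear verbatim in the trace:

  # Advertise the discovery controller (CDC) over mDNS inside the test netns.
  ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local \
      --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp &
  avahi_clientpid=$!

  # Later, add 4421 listeners so each discovered subsystem gains a second path ...
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421

  # ... then drop the original 4420 listeners; the host-side reset/reconnect
  # errors in the log are the expected result of the 4420 path going away.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420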
00:23:21.860 [2024-12-16 10:10:20.282241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.860 [2024-12-16 10:10:20.289597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.860 [2024-12-16 10:10:20.289687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.289726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.860 [2024-12-16 10:10:20.289740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.860 [2024-12-16 10:10:20.289749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.860 [2024-12-16 10:10:20.289762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.860 [2024-12-16 10:10:20.289774] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.860 [2024-12-16 10:10:20.289782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.860 [2024-12-16 10:10:20.289790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.860 [2024-12-16 10:10:20.289802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.860 [2024-12-16 10:10:20.292078] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.860 [2024-12-16 10:10:20.292163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.292202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.292216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.861 [2024-12-16 10:10:20.292224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.292238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.292258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.292267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.292275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.861 [2024-12-16 10:10:20.292288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.861 [2024-12-16 10:10:20.299645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.861 [2024-12-16 10:10:20.299734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.299789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.299803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.861 [2024-12-16 10:10:20.299812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.299825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.299852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.299860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.299868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.861 [2024-12-16 10:10:20.299880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.861 [2024-12-16 10:10:20.302137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.861 [2024-12-16 10:10:20.302223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.302263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.302278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.861 [2024-12-16 10:10:20.302287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.302300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.302312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.302320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.302328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.861 [2024-12-16 10:10:20.302341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
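While the 4420 reconnect attempts keep failing, the same RPCs the test polled earlier can be reused to see what the host still holds. A hedged example over the same /tmp/host.sock socket (the expectation that only 4421 remains is inferred from the earlier '4420 4421' check; the log does not show that follow-up query at this point):

  # Discovery services and controllers as the host currently sees them.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'

  # Remaining TCP paths for one attached controller; once the failed 4420
  # path is cleaned up, this pipeline would be expected to print just 4421.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs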
00:23:21.861 [2024-12-16 10:10:20.309709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.861 [2024-12-16 10:10:20.309959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.310005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.310020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.861 [2024-12-16 10:10:20.310031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.310088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.310104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.310113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.310122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.861 [2024-12-16 10:10:20.310137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.861 [2024-12-16 10:10:20.312199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.861 [2024-12-16 10:10:20.312287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.312327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.312341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.861 [2024-12-16 10:10:20.312350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.312406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.312422] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.312430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.312438] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.861 [2024-12-16 10:10:20.312452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.861 [2024-12-16 10:10:20.319920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.861 [2024-12-16 10:10:20.320011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.320051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.320065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.861 [2024-12-16 10:10:20.320074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.320102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.320115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.320122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.320130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.861 [2024-12-16 10:10:20.320143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.861 [2024-12-16 10:10:20.322260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.861 [2024-12-16 10:10:20.322350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.322427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.322444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.861 [2024-12-16 10:10:20.322453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.322468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.322481] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.322489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.322497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.861 [2024-12-16 10:10:20.322510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.861 [2024-12-16 10:10:20.329983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.861 [2024-12-16 10:10:20.330108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.330148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.330163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.861 [2024-12-16 10:10:20.330172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.330186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.330199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.330207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.330215] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.861 [2024-12-16 10:10:20.330228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.861 [2024-12-16 10:10:20.332324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.861 [2024-12-16 10:10:20.332440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.332499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.332514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.861 [2024-12-16 10:10:20.332524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.332539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.332553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.332562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.861 [2024-12-16 10:10:20.332571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.861 [2024-12-16 10:10:20.332585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.861 [2024-12-16 10:10:20.340047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.861 [2024-12-16 10:10:20.340151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.340192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.861 [2024-12-16 10:10:20.340207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.861 [2024-12-16 10:10:20.340216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.861 [2024-12-16 10:10:20.340230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.861 [2024-12-16 10:10:20.340242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.861 [2024-12-16 10:10:20.340250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.340270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.862 [2024-12-16 10:10:20.340283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.862 [2024-12-16 10:10:20.342412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.862 [2024-12-16 10:10:20.342502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.342545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.342560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.862 [2024-12-16 10:10:20.342585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.342600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.342614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.342622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.342631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.862 [2024-12-16 10:10:20.342645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.862 [2024-12-16 10:10:20.350131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.862 [2024-12-16 10:10:20.350235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.350280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.350296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.862 [2024-12-16 10:10:20.350305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.350321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.350334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.350343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.350352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.862 [2024-12-16 10:10:20.350378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.862 [2024-12-16 10:10:20.352462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.862 [2024-12-16 10:10:20.352556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.352600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.352615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.862 [2024-12-16 10:10:20.352626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.352641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.352655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.352664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.352673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.862 [2024-12-16 10:10:20.352687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.862 [2024-12-16 10:10:20.360201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.862 [2024-12-16 10:10:20.360489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.360538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.360554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.862 [2024-12-16 10:10:20.360565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.360594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.360610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.360619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.360628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.862 [2024-12-16 10:10:20.360644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.862 [2024-12-16 10:10:20.362513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.862 [2024-12-16 10:10:20.362595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.362640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.362655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.862 [2024-12-16 10:10:20.362666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.362681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.362710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.362734] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.362742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.862 [2024-12-16 10:10:20.362770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.862 [2024-12-16 10:10:20.370440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.862 [2024-12-16 10:10:20.370533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.370574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.370589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.862 [2024-12-16 10:10:20.370599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.370613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.370626] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.370634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.370642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.862 [2024-12-16 10:10:20.370656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.862 [2024-12-16 10:10:20.372563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.862 [2024-12-16 10:10:20.372652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.372693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.372707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.862 [2024-12-16 10:10:20.372732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.372746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.372758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.372765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.372773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.862 [2024-12-16 10:10:20.372786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.862 [2024-12-16 10:10:20.380489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:21.862 [2024-12-16 10:10:20.380579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.380619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.380633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67760 with addr=10.0.0.3, port=4420 00:23:21.862 [2024-12-16 10:10:20.380642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67760 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.380655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67760 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.380667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.380675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.380683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:21.862 [2024-12-16 10:10:20.380695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.862 [2024-12-16 10:10:20.382624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.862 [2024-12-16 10:10:20.382718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.382786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.862 [2024-12-16 10:10:20.382800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e7caa0 with addr=10.0.0.2, port=4420 00:23:21.862 [2024-12-16 10:10:20.382809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7caa0 is same with the state(5) to be set 00:23:21.862 [2024-12-16 10:10:20.382822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7caa0 (9): Bad file descriptor 00:23:21.862 [2024-12-16 10:10:20.382834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:21.862 [2024-12-16 10:10:20.382842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:21.862 [2024-12-16 10:10:20.382849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:21.862 [2024-12-16 10:10:20.382862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
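The five blocks above are one host-side reconnect loop: each pass tries to re-open a TCP qpair to nqn.2016-06.io.spdk:cnode20 at 10.0.0.3:4420 and to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, gets connect() errno 111 (ECONNREFUSED) because nothing listens on port 4420 any more, and marks the controller reset as failed. A minimal target-side sketch that produces exactly this pattern, assuming the listener was moved with the standard rpc.py helpers (the target-side trace is not part of this excerpt), would be:

    # hypothetical reproduction, not taken from this log: drop the 4420 listener, add 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

Once the 4420 listener is gone, every reconnect poll fails fast with ECONNREFUSED, and the host only recovers when the next discovery log page advertises 4421, which is what the "not found" / "found again" lines below report.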
00:23:21.862 [2024-12-16 10:10:20.390091] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:21.862 [2024-12-16 10:10:20.390257] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:21.862 [2024-12-16 10:10:20.390283] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:21.863 [2024-12-16 10:10:20.391058] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:21.863 [2024-12-16 10:10:20.391076] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:21.863 [2024-12-16 10:10:20.391090] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.863 [2024-12-16 10:10:20.476166] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:21.863 [2024-12-16 10:10:20.477168] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:22.798 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@68 -- # sort 00:23:22.798 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@68 -- # xargs 00:23:22.798 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.798 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.798 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.798 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:22.798 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.798 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.798 10:10:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.798 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
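After the discovery pollers re-add both subsystems on port 4421, the surrounding checks confirm that the controller names (mdns0_nvme0, mdns1_nvme0) and their namespace bdevs survived the path switch, and that the only path left on each controller uses trsvcid 4421. The same path check can be run by hand against the host RPC socket, mirroring the get_subsystem_paths helper traced here:

    # expected output: 4421, the only listener left for the subsystem
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'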
00:23:23.056 10:10:21 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:23.056 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.056 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@72 -- # xargs 00:23:23.056 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:23.056 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.056 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:23.056 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:23.056 10:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.056 10:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:23.056 10:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.056 10:10:21 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:23.056 [2024-12-16 10:10:21.568185] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:23.992 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@80 -- # sort 00:23:23.992 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@80 -- # xargs 00:23:23.992 10:10:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.992 10:10:22 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.251 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.251 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@68 -- # xargs 00:23:24.251 10:10:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:24.251 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.251 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.251 10:10:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:24.251 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.251 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 10:10:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:24.251 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.251 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 10:10:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:24.251 10:10:22 -- common/autotest_common.sh@650 -- # local es=0 00:23:24.251 10:10:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:24.251 10:10:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:24.251 10:10:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.251 10:10:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:24.251 10:10:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:24.251 10:10:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:24.251 10:10:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.251 10:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:24.251 [2024-12-16 10:10:22.797192] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:24.251 2024/12/16 10:10:22 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:24.251 request: 00:23:24.251 { 00:23:24.251 "method": "bdev_nvme_start_mdns_discovery", 00:23:24.251 "params": { 00:23:24.251 "name": "mdns", 00:23:24.251 "svcname": "_nvme-disc._http", 00:23:24.251 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:24.251 } 00:23:24.251 } 00:23:24.251 Got JSON-RPC error response 00:23:24.251 GoRPCClient: error on JSON-RPC call 00:23:24.251 10:10:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:24.251 10:10:22 -- 
common/autotest_common.sh@653 -- # es=1 00:23:24.251 10:10:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:24.251 10:10:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:24.251 10:10:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:24.251 10:10:22 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:24.819 [2024-12-16 10:10:23.185827] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:24.819 [2024-12-16 10:10:23.285823] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:24.819 [2024-12-16 10:10:23.385827] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:24.819 [2024-12-16 10:10:23.385843] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:24.819 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:24.819 cookie is 0 00:23:24.819 is_local: 1 00:23:24.819 our_own: 0 00:23:24.819 wide_area: 0 00:23:24.819 multicast: 1 00:23:24.819 cached: 1 00:23:25.077 [2024-12-16 10:10:23.485828] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:25.077 [2024-12-16 10:10:23.485846] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:25.077 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:25.077 cookie is 0 00:23:25.077 is_local: 1 00:23:25.077 our_own: 0 00:23:25.077 wide_area: 0 00:23:25.077 multicast: 1 00:23:25.077 cached: 1 00:23:26.012 [2024-12-16 10:10:24.391698] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:26.012 [2024-12-16 10:10:24.391723] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:26.012 [2024-12-16 10:10:24.391740] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:26.012 [2024-12-16 10:10:24.477867] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:26.012 [2024-12-16 10:10:24.491570] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.012 [2024-12-16 10:10:24.491589] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.012 [2024-12-16 10:10:24.491602] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.012 [2024-12-16 10:10:24.540765] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:26.012 [2024-12-16 10:10:24.540789] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:26.012 [2024-12-16 10:10:24.578085] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:26.271 [2024-12-16 10:10:24.636809] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:26.271 [2024-12-16 10:10:24.636976] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:29.589 10:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@80 -- # sort 00:23:29.589 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@80 -- # xargs 00:23:29.589 10:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.589 10:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # sort 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # xargs 00:23:29.589 10:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.589 10:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@64 -- # sort 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@64 -- # xargs 00:23:29.589 10:10:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:29.589 10:10:27 -- common/autotest_common.sh@650 -- # local es=0 00:23:29.589 10:10:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:29.589 10:10:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:29.589 10:10:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.589 10:10:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:29.589 10:10:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:29.589 10:10:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:29.589 10:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 [2024-12-16 10:10:27.987613] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:29.589 2024/12/16 10:10:27 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:29.589 request: 00:23:29.589 { 00:23:29.589 "method": "bdev_nvme_start_mdns_discovery", 00:23:29.589 "params": { 00:23:29.589 "name": "cdc", 00:23:29.589 "svcname": "_nvme-disc._tcp", 00:23:29.589 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:29.589 } 00:23:29.589 } 00:23:29.589 Got JSON-RPC error response 00:23:29.589 GoRPCClient: error on JSON-RPC call 00:23:29.589 10:10:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:29.589 10:10:27 -- common/autotest_common.sh@653 -- # es=1 00:23:29.589 10:10:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.589 10:10:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.589 10:10:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # sort 00:23:29.589 10:10:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:27 -- host/mdns_discovery.sh@76 -- # xargs 00:23:29.589 10:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:29.589 10:10:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:29.589 10:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:29.589 10:10:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.589 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:29.589 10:10:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@197 -- # kill 98298 00:23:29.589 10:10:28 -- host/mdns_discovery.sh@200 -- # wait 98298 00:23:29.848 [2024-12-16 10:10:28.222304] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:29.848 10:10:28 -- host/mdns_discovery.sh@201 -- # kill 98389 00:23:29.848 Got SIGTERM, quitting. 00:23:29.848 10:10:28 -- host/mdns_discovery.sh@202 -- # kill 98328 00:23:29.848 10:10:28 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:29.848 10:10:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:29.848 10:10:28 -- nvmf/common.sh@116 -- # sync 00:23:29.848 Got SIGTERM, quitting. 
00:23:29.848 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:29.848 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:29.848 avahi-daemon 0.8 exiting. 00:23:29.848 10:10:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:29.848 10:10:28 -- nvmf/common.sh@119 -- # set +e 00:23:29.848 10:10:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:29.848 10:10:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:29.848 rmmod nvme_tcp 00:23:29.848 rmmod nvme_fabrics 00:23:29.848 rmmod nvme_keyring 00:23:29.848 10:10:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:29.848 10:10:28 -- nvmf/common.sh@123 -- # set -e 00:23:29.848 10:10:28 -- nvmf/common.sh@124 -- # return 0 00:23:29.848 10:10:28 -- nvmf/common.sh@477 -- # '[' -n 98267 ']' 00:23:29.848 10:10:28 -- nvmf/common.sh@478 -- # killprocess 98267 00:23:29.848 10:10:28 -- common/autotest_common.sh@936 -- # '[' -z 98267 ']' 00:23:29.848 10:10:28 -- common/autotest_common.sh@940 -- # kill -0 98267 00:23:29.848 10:10:28 -- common/autotest_common.sh@941 -- # uname 00:23:29.848 10:10:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:29.848 10:10:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98267 00:23:30.106 killing process with pid 98267 00:23:30.106 10:10:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:30.106 10:10:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:30.106 10:10:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98267' 00:23:30.106 10:10:28 -- common/autotest_common.sh@955 -- # kill 98267 00:23:30.106 10:10:28 -- common/autotest_common.sh@960 -- # wait 98267 00:23:30.106 10:10:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:30.106 10:10:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:30.106 10:10:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:30.106 10:10:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.106 10:10:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:30.106 10:10:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.106 10:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.106 10:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.365 10:10:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:30.365 00:23:30.365 real 0m20.055s 00:23:30.365 user 0m39.775s 00:23:30.365 sys 0m1.968s 00:23:30.365 10:10:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:30.365 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 ************************************ 00:23:30.365 END TEST nvmf_mdns_discovery 00:23:30.365 ************************************ 00:23:30.365 10:10:28 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:30.365 10:10:28 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:30.365 10:10:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:30.365 10:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:30.365 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 ************************************ 00:23:30.365 START TEST nvmf_multipath 00:23:30.365 ************************************ 00:23:30.365 10:10:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:30.365 * Looking for 
test storage... 00:23:30.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:30.365 10:10:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:30.365 10:10:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:30.365 10:10:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:30.365 10:10:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:30.365 10:10:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:30.365 10:10:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:30.365 10:10:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:30.365 10:10:28 -- scripts/common.sh@335 -- # IFS=.-: 00:23:30.365 10:10:28 -- scripts/common.sh@335 -- # read -ra ver1 00:23:30.365 10:10:28 -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.365 10:10:28 -- scripts/common.sh@336 -- # read -ra ver2 00:23:30.365 10:10:28 -- scripts/common.sh@337 -- # local 'op=<' 00:23:30.365 10:10:28 -- scripts/common.sh@339 -- # ver1_l=2 00:23:30.365 10:10:28 -- scripts/common.sh@340 -- # ver2_l=1 00:23:30.365 10:10:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:30.365 10:10:28 -- scripts/common.sh@343 -- # case "$op" in 00:23:30.365 10:10:28 -- scripts/common.sh@344 -- # : 1 00:23:30.365 10:10:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:30.365 10:10:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:30.365 10:10:28 -- scripts/common.sh@364 -- # decimal 1 00:23:30.365 10:10:28 -- scripts/common.sh@352 -- # local d=1 00:23:30.365 10:10:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.365 10:10:28 -- scripts/common.sh@354 -- # echo 1 00:23:30.365 10:10:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:30.365 10:10:28 -- scripts/common.sh@365 -- # decimal 2 00:23:30.365 10:10:28 -- scripts/common.sh@352 -- # local d=2 00:23:30.365 10:10:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.365 10:10:28 -- scripts/common.sh@354 -- # echo 2 00:23:30.365 10:10:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:30.365 10:10:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:30.365 10:10:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:30.365 10:10:28 -- scripts/common.sh@367 -- # return 0 00:23:30.365 10:10:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.365 10:10:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:30.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.365 --rc genhtml_branch_coverage=1 00:23:30.365 --rc genhtml_function_coverage=1 00:23:30.365 --rc genhtml_legend=1 00:23:30.365 --rc geninfo_all_blocks=1 00:23:30.365 --rc geninfo_unexecuted_blocks=1 00:23:30.365 00:23:30.365 ' 00:23:30.365 10:10:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:30.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.365 --rc genhtml_branch_coverage=1 00:23:30.365 --rc genhtml_function_coverage=1 00:23:30.365 --rc genhtml_legend=1 00:23:30.365 --rc geninfo_all_blocks=1 00:23:30.365 --rc geninfo_unexecuted_blocks=1 00:23:30.365 00:23:30.365 ' 00:23:30.366 10:10:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 
00:23:30.366 10:10:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:30.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.366 --rc genhtml_branch_coverage=1 00:23:30.366 --rc genhtml_function_coverage=1 00:23:30.366 --rc genhtml_legend=1 00:23:30.366 --rc geninfo_all_blocks=1 00:23:30.366 --rc geninfo_unexecuted_blocks=1 00:23:30.366 00:23:30.366 ' 00:23:30.366 10:10:28 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.366 10:10:28 -- nvmf/common.sh@7 -- # uname -s 00:23:30.366 10:10:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.366 10:10:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.366 10:10:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.366 10:10:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.366 10:10:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.366 10:10:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.366 10:10:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.366 10:10:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.366 10:10:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.366 10:10:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.366 10:10:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:23:30.366 10:10:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:23:30.366 10:10:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.366 10:10:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.366 10:10:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.366 10:10:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.366 10:10:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.366 10:10:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.366 10:10:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.366 10:10:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.366 10:10:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.625 10:10:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.625 10:10:28 -- paths/export.sh@5 -- # export PATH 00:23:30.625 10:10:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.625 10:10:28 -- nvmf/common.sh@46 -- # : 0 00:23:30.625 10:10:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:30.625 10:10:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:30.625 10:10:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:30.625 10:10:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.625 10:10:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.625 10:10:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:30.625 10:10:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:30.625 10:10:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:30.625 10:10:28 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.625 10:10:28 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.625 10:10:28 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:30.625 10:10:28 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:30.625 10:10:28 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.625 10:10:28 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:30.625 10:10:28 -- host/multipath.sh@30 -- # nvmftestinit 00:23:30.625 10:10:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:30.625 10:10:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.625 10:10:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:30.625 10:10:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:30.625 10:10:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:30.625 10:10:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.625 10:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.625 10:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.625 10:10:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:30.625 10:10:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:30.625 10:10:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:30.625 10:10:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:30.625 10:10:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:30.625 10:10:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:30.625 10:10:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.625 10:10:29 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.625 10:10:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:30.625 10:10:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:30.625 10:10:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.625 10:10:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.625 10:10:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.625 10:10:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.625 10:10:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.625 10:10:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.625 10:10:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.625 10:10:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.625 10:10:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:30.625 10:10:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:30.625 Cannot find device "nvmf_tgt_br" 00:23:30.625 10:10:29 -- nvmf/common.sh@154 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.625 Cannot find device "nvmf_tgt_br2" 00:23:30.625 10:10:29 -- nvmf/common.sh@155 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:30.625 10:10:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:30.625 Cannot find device "nvmf_tgt_br" 00:23:30.625 10:10:29 -- nvmf/common.sh@157 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:30.625 Cannot find device "nvmf_tgt_br2" 00:23:30.625 10:10:29 -- nvmf/common.sh@158 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:30.625 10:10:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:30.625 10:10:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.625 10:10:29 -- nvmf/common.sh@161 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.625 10:10:29 -- nvmf/common.sh@162 -- # true 00:23:30.625 10:10:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.625 10:10:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.625 10:10:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.625 10:10:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.625 10:10:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.625 10:10:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.625 10:10:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.625 10:10:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:30.625 10:10:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:30.625 10:10:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:30.625 10:10:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:30.625 10:10:29 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:30.625 10:10:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:30.625 10:10:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.625 10:10:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.625 10:10:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.625 10:10:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:30.625 10:10:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:30.625 10:10:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.884 10:10:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.884 10:10:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.884 10:10:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.884 10:10:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.884 10:10:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:30.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:23:30.884 00:23:30.884 --- 10.0.0.2 ping statistics --- 00:23:30.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.884 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:30.884 10:10:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:30.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:23:30.884 00:23:30.884 --- 10.0.0.3 ping statistics --- 00:23:30.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.884 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:30.884 10:10:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:30.884 00:23:30.884 --- 10.0.0.1 ping statistics --- 00:23:30.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.884 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:30.884 10:10:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.884 10:10:29 -- nvmf/common.sh@421 -- # return 0 00:23:30.884 10:10:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:30.884 10:10:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.884 10:10:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:30.884 10:10:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:30.884 10:10:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.884 10:10:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:30.884 10:10:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:30.884 10:10:29 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:30.884 10:10:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:30.884 10:10:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.884 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 10:10:29 -- nvmf/common.sh@469 -- # nvmfpid=98899 00:23:30.884 10:10:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:30.884 10:10:29 -- nvmf/common.sh@470 -- # waitforlisten 98899 00:23:30.884 10:10:29 -- common/autotest_common.sh@829 -- # '[' -z 98899 ']' 00:23:30.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.884 10:10:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.884 10:10:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.884 10:10:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.884 10:10:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.884 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:30.884 [2024-12-16 10:10:29.390504] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:30.884 [2024-12-16 10:10:29.390608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.143 [2024-12-16 10:10:29.532430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:31.143 [2024-12-16 10:10:29.620740] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:31.143 [2024-12-16 10:10:29.621167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.143 [2024-12-16 10:10:29.621218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.143 [2024-12-16 10:10:29.621344] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
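The nvmf_tgt instance for the multipath test has just been launched inside the nvmf_tgt_ns_spdk namespace with all tracepoint groups enabled (-e 0xFFFF), and its startup notice above names the two ways to look at those tracepoints if a run needs debugging afterwards. As a sketch, with spdk_trace assumed to be at its usual build/bin location:

    # live snapshot of the nvmf tracepoints, exactly as the startup notice suggests
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0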
00:23:31.143 [2024-12-16 10:10:29.621525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.143 [2024-12-16 10:10:29.621536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.078 10:10:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:32.078 10:10:30 -- common/autotest_common.sh@862 -- # return 0 00:23:32.078 10:10:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:32.078 10:10:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:32.078 10:10:30 -- common/autotest_common.sh@10 -- # set +x 00:23:32.078 10:10:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.078 10:10:30 -- host/multipath.sh@33 -- # nvmfapp_pid=98899 00:23:32.078 10:10:30 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.337 [2024-12-16 10:10:30.744000] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.337 10:10:30 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:32.596 Malloc0 00:23:32.596 10:10:31 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:32.854 10:10:31 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.113 10:10:31 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.371 [2024-12-16 10:10:31.798520] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.371 10:10:31 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.629 [2024-12-16 10:10:32.022646] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.629 10:10:32 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:33.629 10:10:32 -- host/multipath.sh@44 -- # bdevperf_pid=99008 00:23:33.629 10:10:32 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.629 10:10:32 -- host/multipath.sh@47 -- # waitforlisten 99008 /var/tmp/bdevperf.sock 00:23:33.629 10:10:32 -- common/autotest_common.sh@829 -- # '[' -z 99008 ']' 00:23:33.629 10:10:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.629 10:10:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.629 10:10:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
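At this point the multipath scenario is fully wired: the tcp transport is created, Malloc0 is exposed through nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r) and listeners on both 4420 and 4421, and bdevperf is started so the same controller can be attached once per listener with -x multipath. I/O is then steered from the target side by flipping the ANA state of each listener; the set_ANA_state helper traced below amounts to two RPC calls of this shape:

    # make 4420 non-optimized and 4421 optimized, so I/O should move to 4421
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized

confirm_io_on_port then runs the bpftrace script nvmf_path.bt against the target pid for a few seconds and checks that the per-path I/O counters (the "@path[10.0.0.2, 4421]: ..." lines in trace.txt) point at the same port that nvmf_subsystem_get_listeners reports for the requested ANA state.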
00:23:33.629 10:10:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.629 10:10:32 -- common/autotest_common.sh@10 -- # set +x 00:23:34.565 10:10:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.565 10:10:33 -- common/autotest_common.sh@862 -- # return 0 00:23:34.565 10:10:33 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:34.823 10:10:33 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:35.082 Nvme0n1 00:23:35.341 10:10:33 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:35.599 Nvme0n1 00:23:35.599 10:10:34 -- host/multipath.sh@78 -- # sleep 1 00:23:35.600 10:10:34 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:36.535 10:10:35 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:36.535 10:10:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.792 10:10:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:37.051 10:10:35 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:37.051 10:10:35 -- host/multipath.sh@65 -- # dtrace_pid=99094 00:23:37.051 10:10:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:37.051 10:10:35 -- host/multipath.sh@66 -- # sleep 6 00:23:43.615 10:10:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:43.616 10:10:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:43.616 10:10:41 -- host/multipath.sh@67 -- # active_port=4421 00:23:43.616 10:10:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.616 Attaching 4 probes... 
00:23:43.616 @path[10.0.0.2, 4421]: 20995 00:23:43.616 @path[10.0.0.2, 4421]: 20570 00:23:43.616 @path[10.0.0.2, 4421]: 20510 00:23:43.616 @path[10.0.0.2, 4421]: 20149 00:23:43.616 @path[10.0.0.2, 4421]: 20401 00:23:43.616 10:10:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:43.616 10:10:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:43.616 10:10:41 -- host/multipath.sh@69 -- # sed -n 1p 00:23:43.616 10:10:41 -- host/multipath.sh@69 -- # port=4421 00:23:43.616 10:10:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.616 10:10:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.616 10:10:41 -- host/multipath.sh@72 -- # kill 99094 00:23:43.616 10:10:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.616 10:10:41 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:43.616 10:10:41 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.616 10:10:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.874 10:10:42 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:43.874 10:10:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.874 10:10:42 -- host/multipath.sh@65 -- # dtrace_pid=99227 00:23:43.874 10:10:42 -- host/multipath.sh@66 -- # sleep 6 00:23:50.439 10:10:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:50.439 10:10:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:50.439 10:10:48 -- host/multipath.sh@67 -- # active_port=4420 00:23:50.439 10:10:48 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.439 Attaching 4 probes... 
00:23:50.439 @path[10.0.0.2, 4420]: 21419 00:23:50.439 @path[10.0.0.2, 4420]: 21737 00:23:50.439 @path[10.0.0.2, 4420]: 21864 00:23:50.439 @path[10.0.0.2, 4420]: 21840 00:23:50.439 @path[10.0.0.2, 4420]: 21596 00:23:50.439 10:10:48 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:50.439 10:10:48 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:50.439 10:10:48 -- host/multipath.sh@69 -- # sed -n 1p 00:23:50.439 10:10:48 -- host/multipath.sh@69 -- # port=4420 00:23:50.439 10:10:48 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.439 10:10:48 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.439 10:10:48 -- host/multipath.sh@72 -- # kill 99227 00:23:50.439 10:10:48 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.439 10:10:48 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:50.439 10:10:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:50.439 10:10:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:50.697 10:10:49 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:50.697 10:10:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:50.697 10:10:49 -- host/multipath.sh@65 -- # dtrace_pid=99363 00:23:50.697 10:10:49 -- host/multipath.sh@66 -- # sleep 6 00:23:57.262 10:10:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:57.262 10:10:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:57.262 10:10:55 -- host/multipath.sh@67 -- # active_port=4421 00:23:57.262 10:10:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.262 Attaching 4 probes... 
00:23:57.262 @path[10.0.0.2, 4421]: 15545 00:23:57.262 @path[10.0.0.2, 4421]: 19444 00:23:57.262 @path[10.0.0.2, 4421]: 19421 00:23:57.262 @path[10.0.0.2, 4421]: 19577 00:23:57.262 @path[10.0.0.2, 4421]: 19380 00:23:57.262 10:10:55 -- host/multipath.sh@69 -- # sed -n 1p 00:23:57.262 10:10:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:57.262 10:10:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:57.262 10:10:55 -- host/multipath.sh@69 -- # port=4421 00:23:57.262 10:10:55 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.262 10:10:55 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.262 10:10:55 -- host/multipath.sh@72 -- # kill 99363 00:23:57.262 10:10:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.262 10:10:55 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:57.262 10:10:55 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.262 10:10:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:57.521 10:10:56 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:57.521 10:10:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:57.521 10:10:56 -- host/multipath.sh@65 -- # dtrace_pid=99489 00:23:57.521 10:10:56 -- host/multipath.sh@66 -- # sleep 6 00:24:04.084 10:11:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:04.084 10:11:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:04.084 10:11:02 -- host/multipath.sh@67 -- # active_port= 00:24:04.084 10:11:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.084 Attaching 4 probes... 
00:24:04.084 00:24:04.084 00:24:04.084 00:24:04.084 00:24:04.084 00:24:04.084 10:11:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:04.084 10:11:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:04.084 10:11:02 -- host/multipath.sh@69 -- # sed -n 1p 00:24:04.084 10:11:02 -- host/multipath.sh@69 -- # port= 00:24:04.084 10:11:02 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:04.084 10:11:02 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:04.084 10:11:02 -- host/multipath.sh@72 -- # kill 99489 00:24:04.084 10:11:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.084 10:11:02 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:04.084 10:11:02 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.084 10:11:02 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.354 10:11:02 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:04.354 10:11:02 -- host/multipath.sh@65 -- # dtrace_pid=99625 00:24:04.354 10:11:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:04.354 10:11:02 -- host/multipath.sh@66 -- # sleep 6 00:24:10.976 10:11:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:10.976 10:11:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:10.976 10:11:09 -- host/multipath.sh@67 -- # active_port=4421 00:24:10.976 10:11:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.976 Attaching 4 probes... 
00:24:10.976 @path[10.0.0.2, 4421]: 20158 00:24:10.976 @path[10.0.0.2, 4421]: 20505 00:24:10.976 @path[10.0.0.2, 4421]: 20709 00:24:10.976 @path[10.0.0.2, 4421]: 20587 00:24:10.976 @path[10.0.0.2, 4421]: 20722 00:24:10.976 10:11:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:10.976 10:11:09 -- host/multipath.sh@69 -- # sed -n 1p 00:24:10.976 10:11:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:10.976 10:11:09 -- host/multipath.sh@69 -- # port=4421 00:24:10.976 10:11:09 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:10.976 10:11:09 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:10.976 10:11:09 -- host/multipath.sh@72 -- # kill 99625 00:24:10.976 10:11:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.976 10:11:09 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:10.976 [2024-12-16 10:11:09.251209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251546] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.976 [2024-12-16 10:11:09.251614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251631] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 [2024-12-16 10:11:09.251688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f4370 is same with the state(5) to be set 00:24:10.977 10:11:09 -- host/multipath.sh@101 -- # sleep 1 00:24:11.913 10:11:10 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:11.913 10:11:10 -- host/multipath.sh@65 -- # dtrace_pid=99755 00:24:11.913 10:11:10 -- host/multipath.sh@66 -- # sleep 6 00:24:11.913 10:11:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:18.477 10:11:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:18.477 10:11:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:18.477 10:11:16 -- host/multipath.sh@67 -- # active_port=4420 00:24:18.477 10:11:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.477 Attaching 4 probes... 
00:24:18.477 @path[10.0.0.2, 4420]: 20589 00:24:18.477 @path[10.0.0.2, 4420]: 20846 00:24:18.477 @path[10.0.0.2, 4420]: 20410 00:24:18.477 @path[10.0.0.2, 4420]: 19994 00:24:18.477 @path[10.0.0.2, 4420]: 20317 00:24:18.477 10:11:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:18.477 10:11:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:18.477 10:11:16 -- host/multipath.sh@69 -- # sed -n 1p 00:24:18.477 10:11:16 -- host/multipath.sh@69 -- # port=4420 00:24:18.477 10:11:16 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:18.477 10:11:16 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:18.477 10:11:16 -- host/multipath.sh@72 -- # kill 99755 00:24:18.477 10:11:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.478 10:11:16 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:18.478 [2024-12-16 10:11:16.873641] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:18.478 10:11:16 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.736 10:11:17 -- host/multipath.sh@111 -- # sleep 6 00:24:25.303 10:11:23 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:25.303 10:11:23 -- host/multipath.sh@65 -- # dtrace_pid=99952 00:24:25.303 10:11:23 -- host/multipath.sh@66 -- # sleep 6 00:24:25.303 10:11:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:30.571 10:11:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:30.571 10:11:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:31.147 10:11:29 -- host/multipath.sh@67 -- # active_port=4421 00:24:31.147 10:11:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.147 Attaching 4 probes... 
00:24:31.147 @path[10.0.0.2, 4421]: 20451 00:24:31.147 @path[10.0.0.2, 4421]: 21231 00:24:31.147 @path[10.0.0.2, 4421]: 21416 00:24:31.147 @path[10.0.0.2, 4421]: 21388 00:24:31.147 @path[10.0.0.2, 4421]: 21315 00:24:31.147 10:11:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:31.147 10:11:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:31.147 10:11:29 -- host/multipath.sh@69 -- # sed -n 1p 00:24:31.147 10:11:29 -- host/multipath.sh@69 -- # port=4421 00:24:31.147 10:11:29 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:31.147 10:11:29 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:31.147 10:11:29 -- host/multipath.sh@72 -- # kill 99952 00:24:31.147 10:11:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.147 10:11:29 -- host/multipath.sh@114 -- # killprocess 99008 00:24:31.147 10:11:29 -- common/autotest_common.sh@936 -- # '[' -z 99008 ']' 00:24:31.147 10:11:29 -- common/autotest_common.sh@940 -- # kill -0 99008 00:24:31.147 10:11:29 -- common/autotest_common.sh@941 -- # uname 00:24:31.147 10:11:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.147 10:11:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99008 00:24:31.147 killing process with pid 99008 00:24:31.147 10:11:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:31.147 10:11:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:31.147 10:11:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99008' 00:24:31.147 10:11:29 -- common/autotest_common.sh@955 -- # kill 99008 00:24:31.147 10:11:29 -- common/autotest_common.sh@960 -- # wait 99008 00:24:31.147 Connection closed with partial response: 00:24:31.147 00:24:31.147 00:24:31.147 10:11:29 -- host/multipath.sh@116 -- # wait 99008 00:24:31.147 10:11:29 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:31.147 [2024-12-16 10:10:32.096809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:31.147 [2024-12-16 10:10:32.096924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99008 ] 00:24:31.147 [2024-12-16 10:10:32.234965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.147 [2024-12-16 10:10:32.313748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.147 Running I/O for 90 seconds... 
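Every confirm_io_on_port cycle in the trace above follows the same pattern: attach bpftrace probes to the target, let bdevperf run for a few seconds, then compare the listener that advertises the expected ANA state against the port the probes actually counted I/O on. A sketch of that check, assuming the bpftrace output is redirected into trace.txt (the exact redirection is not visible in the xtrace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Attach the per-path I/O counters (nvmf_path.bt) to the target process (pid 98899 in this run).
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98899 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    dtrace_pid=$!
    sleep 6                                        # let bdevperf push I/O while the probes count per path

    # Port of the listener whose ANA state matches what the test just configured (here: optimized).
    active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # Port the probes actually saw I/O on: trace lines look like "@path[10.0.0.2, 4421]: 20995".
    port=$(cut -d ']' -f1 "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)

    [[ "$port" == "$active_port" ]] || echo "I/O is not flowing on the expected port" >&2
    kill "$dtrace_pid"; rm -f "$trace"

The non_optimized and inaccessible cycles differ only in the jq selector and the expected port; when both listeners are made inaccessible the probe output is empty, both sides of the comparison collapse to an empty string, and that is exactly what the [[ '' == '' ]] check earlier in the trace verifies.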
00:24:31.147 [2024-12-16 10:10:42.372056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.147 [2024-12-16 10:10:42.372122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.147 [2024-12-16 10:10:42.372424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.147 [2024-12-16 10:10:42.372450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.372918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.372950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.372994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.373725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 
10:10:42.373855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.373972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.373992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.374006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.374027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.374041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.374090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.374112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.374133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.148 [2024-12-16 10:10:42.374148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.374169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.148 [2024-12-16 10:10:42.374184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.148 [2024-12-16 10:10:42.374206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.374229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.378594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.378701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.378797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.378924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.378969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.378989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.379365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.149 [2024-12-16 10:10:42.379398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.149 
[2024-12-16 10:10:42.379721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.379969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.149 [2024-12-16 10:10:42.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.149 [2024-12-16 10:10:42.382334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 
cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.382992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.383024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.383454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.383500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.383613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.383796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.383954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.383974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.150 [2024-12-16 10:10:42.383988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.384038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.150 [2024-12-16 10:10:42.384260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.150 [2024-12-16 10:10:42.384295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.150 [2024-12-16 10:10:42.384314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:42.384328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.951746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.951822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.951910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.951946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.951969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.951984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.151 [2024-12-16 10:10:48.952558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.952939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:24:31.151 [2024-12-16 10:10:48.952974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.952989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.151 [2024-12-16 10:10:48.953471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.151 [2024-12-16 10:10:48.953492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.953544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.953580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.953977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.953991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.954025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.954087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:31.152 [2024-12-16 10:10:48.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.954293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.954329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.954956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.954984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.955000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.955040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.955623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.152 [2024-12-16 10:10:48.955663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.152 [2024-12-16 10:10:48.955688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.152 [2024-12-16 10:10:48.955703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.955757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.955810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.955848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.955885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.955939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.955965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:24:31.153 [2024-12-16 10:10:48.956016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.956877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.956959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.956986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.957042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.957133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.957175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.153 [2024-12-16 10:10:48.957298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.957340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.153 [2024-12-16 10:10:48.957445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.153 [2024-12-16 10:10:48.957473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.153 [2024-12-16 10:10:48.957488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:55.999590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:55.999896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:55.999910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.000025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.000064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.000097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.000130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.000165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.000895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.000932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.000954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.000967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 
dnr:0 00:24:31.154 [2024-12-16 10:10:56.000989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.001039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.001264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.154 [2024-12-16 10:10:56.001536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.154 [2024-12-16 10:10:56.001559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.154 [2024-12-16 10:10:56.001582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.001621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.001660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.001846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.001888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.001925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.001963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.001986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.155 [2024-12-16 10:10:56.002386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.155 [2024-12-16 10:10:56.002878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.002975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.002988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.155 [2024-12-16 10:10:56.003419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.155 [2024-12-16 10:10:56.003447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:24:31.156 [2024-12-16 10:10:56.003640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.156 [2024-12-16 10:10:56.003655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.156 [2024-12-16 10:10:56.003695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.156 [2024-12-16 10:10:56.003883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.003980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.003993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:10:56.004235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:10:56.004253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.156 [2024-12-16 10:11:09.252546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.156 [2024-12-16 10:11:09.252559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.157 [2024-12-16 10:11:09.252883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.252977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.252990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.157 [2024-12-16 10:11:09.253759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87856 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.157 [2024-12-16 10:11:09.253812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.157 [2024-12-16 10:11:09.253823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.253974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.253988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:31.158 [2024-12-16 10:11:09.254056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.254967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.254980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.254999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.255014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.255040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.158 [2024-12-16 10:11:09.255052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.158 [2024-12-16 10:11:09.255066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.158 [2024-12-16 10:11:09.255078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.159 [2024-12-16 10:11:09.255130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.159 [2024-12-16 10:11:09.255336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.159 [2024-12-16 10:11:09.255419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.255982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.255996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.159 [2024-12-16 10:11:09.256014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18df060 is same with the state(5) to be set 00:24:31.159 [2024-12-16 10:11:09.256053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:31.159 [2024-12-16 10:11:09.256068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:31.159 [2024-12-16 10:11:09.256079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88424 len:8 PRP1 0x0 PRP2 0x0 00:24:31.159 [2024-12-16 10:11:09.256091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256147] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18df060 was disconnected and freed. reset controller. 00:24:31.159 [2024-12-16 10:11:09.256244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.159 [2024-12-16 10:11:09.256267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.159 [2024-12-16 10:11:09.256294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.159 [2024-12-16 10:11:09.256319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.159 [2024-12-16 10:11:09.256344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.159 [2024-12-16 10:11:09.256355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f0a00 is same with the state(5) to be set 00:24:31.159 [2024-12-16 10:11:09.257798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.159 [2024-12-16 10:11:09.257835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f0a00 (9): Bad file descriptor 00:24:31.159 [2024-12-16 10:11:09.257938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.159 [2024-12-16 10:11:09.257993] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.159 [2024-12-16 10:11:09.258015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18f0a00 with addr=10.0.0.2, port=4421 00:24:31.159 [2024-12-16 10:11:09.258029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f0a00 is same with the state(5) to be set 00:24:31.159 [2024-12-16 10:11:09.258058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f0a00 (9): Bad file descriptor 00:24:31.159 [2024-12-16 10:11:09.258111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.159 [2024-12-16 10:11:09.258126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.160 [2024-12-16 10:11:09.258141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.160 [2024-12-16 10:11:09.258178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.160 [2024-12-16 10:11:09.258192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.160 [2024-12-16 10:11:19.316850] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:31.160 Received shutdown signal, test time was about 55.346597 seconds 00:24:31.160 00:24:31.160 Latency(us) 00:24:31.160 [2024-12-16T10:11:29.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.160 [2024-12-16T10:11:29.785Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.160 Verification LBA range: start 0x0 length 0x4000 00:24:31.160 Nvme0n1 : 55.35 11638.91 45.46 0.00 0.00 10980.45 426.36 7046430.72 00:24:31.160 [2024-12-16T10:11:29.785Z] =================================================================================================================== 00:24:31.160 [2024-12-16T10:11:29.785Z] Total : 11638.91 45.46 0.00 0.00 10980.45 426.36 7046430.72 00:24:31.160 10:11:29 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.418 10:11:29 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:31.419 10:11:29 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:31.419 10:11:29 -- host/multipath.sh@125 -- # nvmftestfini 00:24:31.419 10:11:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:31.419 10:11:29 -- nvmf/common.sh@116 -- # sync 00:24:31.419 10:11:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:31.419 10:11:30 -- nvmf/common.sh@119 -- # set +e 00:24:31.419 10:11:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:31.419 10:11:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:31.419 rmmod nvme_tcp 00:24:31.419 rmmod nvme_fabrics 00:24:31.677 rmmod nvme_keyring 00:24:31.677 10:11:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:31.677 10:11:30 -- nvmf/common.sh@123 -- # set -e 00:24:31.677 10:11:30 -- nvmf/common.sh@124 -- # return 0 00:24:31.677 10:11:30 -- nvmf/common.sh@477 -- # '[' -n 98899 ']' 00:24:31.677 10:11:30 -- nvmf/common.sh@478 -- # killprocess 98899 00:24:31.677 10:11:30 -- common/autotest_common.sh@936 -- # '[' -z 98899 ']' 00:24:31.677 10:11:30 -- common/autotest_common.sh@940 -- # kill -0 98899 00:24:31.677 10:11:30 -- 
common/autotest_common.sh@941 -- # uname 00:24:31.677 10:11:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.678 10:11:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98899 00:24:31.678 killing process with pid 98899 00:24:31.678 10:11:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:31.678 10:11:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:31.678 10:11:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98899' 00:24:31.678 10:11:30 -- common/autotest_common.sh@955 -- # kill 98899 00:24:31.678 10:11:30 -- common/autotest_common.sh@960 -- # wait 98899 00:24:31.936 10:11:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:31.936 10:11:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:31.936 10:11:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:31.937 10:11:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.937 10:11:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:31.937 10:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.937 10:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.937 10:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.937 10:11:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:31.937 00:24:31.937 real 1m1.563s 00:24:31.937 user 2m53.629s 00:24:31.937 sys 0m14.080s 00:24:31.937 10:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:31.937 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:31.937 ************************************ 00:24:31.937 END TEST nvmf_multipath 00:24:31.937 ************************************ 00:24:31.937 10:11:30 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:31.937 10:11:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:31.937 10:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:31.937 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:31.937 ************************************ 00:24:31.937 START TEST nvmf_timeout 00:24:31.937 ************************************ 00:24:31.937 10:11:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:31.937 * Looking for test storage... 
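The wall of ABORTED - SQ DELETION notices above is the expected outcome of the multipath test tearing down the active I/O queue pair: every outstanding READ/WRITE on qid:1 completes with that status, bdev_nvme disconnects and frees the qpair, and the initiator reconnects to the alternate listener on 10.0.0.2 port 4421 before the run ends and the latency summary is printed. After nvmf_multipath passes, nvmftestfini unloads the kernel NVMe/TCP modules, kills the target process and drops the virtual network again. A condensed sketch of that teardown, reconstructed from the trace above (the PID and paths are the ones this run happened to use; the helper name and the exact namespace removal command are illustrative, the harness wraps the same steps in killprocess and remove_spdk_ns with extra retry and error handling):

    # Sketch of the cleanup the harness performs after a host test run (assumed shape).
    cleanup_nvmf_test() {
        local nvmfpid=$1                              # e.g. 98899 in this run
        sync
        modprobe -v -r nvme-tcp                       # also drops nvme_fabrics / nvme_keyring, as seen above
        kill "$nvmfpid" && wait "$nvmfpid"            # stop the nvmf_tgt reactor process
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null  # assumed form of remove_spdk_ns
        ip -4 addr flush nvmf_init_if 2>/dev/null     # clear the initiator-side veth address
    }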
00:24:31.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:31.937 10:11:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:31.937 10:11:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:31.937 10:11:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:32.196 10:11:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:32.196 10:11:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:32.196 10:11:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:32.196 10:11:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:32.196 10:11:30 -- scripts/common.sh@335 -- # IFS=.-: 00:24:32.196 10:11:30 -- scripts/common.sh@335 -- # read -ra ver1 00:24:32.196 10:11:30 -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.196 10:11:30 -- scripts/common.sh@336 -- # read -ra ver2 00:24:32.196 10:11:30 -- scripts/common.sh@337 -- # local 'op=<' 00:24:32.196 10:11:30 -- scripts/common.sh@339 -- # ver1_l=2 00:24:32.196 10:11:30 -- scripts/common.sh@340 -- # ver2_l=1 00:24:32.196 10:11:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:32.196 10:11:30 -- scripts/common.sh@343 -- # case "$op" in 00:24:32.196 10:11:30 -- scripts/common.sh@344 -- # : 1 00:24:32.196 10:11:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:32.196 10:11:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.196 10:11:30 -- scripts/common.sh@364 -- # decimal 1 00:24:32.196 10:11:30 -- scripts/common.sh@352 -- # local d=1 00:24:32.196 10:11:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.196 10:11:30 -- scripts/common.sh@354 -- # echo 1 00:24:32.196 10:11:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:32.196 10:11:30 -- scripts/common.sh@365 -- # decimal 2 00:24:32.196 10:11:30 -- scripts/common.sh@352 -- # local d=2 00:24:32.196 10:11:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.196 10:11:30 -- scripts/common.sh@354 -- # echo 2 00:24:32.196 10:11:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:32.196 10:11:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:32.196 10:11:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:32.196 10:11:30 -- scripts/common.sh@367 -- # return 0 00:24:32.196 10:11:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.196 10:11:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.196 --rc genhtml_branch_coverage=1 00:24:32.196 --rc genhtml_function_coverage=1 00:24:32.196 --rc genhtml_legend=1 00:24:32.196 --rc geninfo_all_blocks=1 00:24:32.196 --rc geninfo_unexecuted_blocks=1 00:24:32.196 00:24:32.196 ' 00:24:32.196 10:11:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.196 --rc genhtml_branch_coverage=1 00:24:32.196 --rc genhtml_function_coverage=1 00:24:32.196 --rc genhtml_legend=1 00:24:32.196 --rc geninfo_all_blocks=1 00:24:32.196 --rc geninfo_unexecuted_blocks=1 00:24:32.196 00:24:32.196 ' 00:24:32.196 10:11:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.196 --rc genhtml_branch_coverage=1 00:24:32.196 --rc genhtml_function_coverage=1 00:24:32.196 --rc genhtml_legend=1 00:24:32.196 --rc geninfo_all_blocks=1 00:24:32.196 --rc geninfo_unexecuted_blocks=1 00:24:32.196 00:24:32.196 ' 00:24:32.196 
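Before sourcing the nvmf helpers, the timeout test probes the installed lcov: cmp_versions/lt split each version string on dots and compare the fields numerically, and when lcov is older than 2 the harness keeps the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings seen in the exports around this point. A simplified sketch of that field-by-field comparison, assuming the same approach as scripts/common.sh (names shortened for illustration, leading-zero fields not handled):

    version_lt() {   # illustrative: is version $1 strictly lower than version $2?
        local IFS=.-:
        local -a v1 v2
        local i a b
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0}; b=${v2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy lcov_* --rc option names"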
10:11:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.196 --rc genhtml_branch_coverage=1 00:24:32.196 --rc genhtml_function_coverage=1 00:24:32.196 --rc genhtml_legend=1 00:24:32.196 --rc geninfo_all_blocks=1 00:24:32.196 --rc geninfo_unexecuted_blocks=1 00:24:32.196 00:24:32.196 ' 00:24:32.196 10:11:30 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.196 10:11:30 -- nvmf/common.sh@7 -- # uname -s 00:24:32.196 10:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.196 10:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.196 10:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.196 10:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.196 10:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.196 10:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.196 10:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.196 10:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.196 10:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.196 10:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.196 10:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:24:32.196 10:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:24:32.196 10:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.196 10:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.196 10:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.196 10:11:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.196 10:11:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.196 10:11:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.196 10:11:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.196 10:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.196 10:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.196 10:11:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.196 10:11:30 -- paths/export.sh@5 -- # export PATH 00:24:32.197 10:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.197 10:11:30 -- nvmf/common.sh@46 -- # : 0 00:24:32.197 10:11:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:32.197 10:11:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:32.197 10:11:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:32.197 10:11:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.197 10:11:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.197 10:11:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:32.197 10:11:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:32.197 10:11:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:32.197 10:11:30 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.197 10:11:30 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.197 10:11:30 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:32.197 10:11:30 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:32.197 10:11:30 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.197 10:11:30 -- host/timeout.sh@19 -- # nvmftestinit 00:24:32.197 10:11:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:32.197 10:11:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.197 10:11:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:32.197 10:11:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:32.197 10:11:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:32.197 10:11:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.197 10:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.197 10:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.197 10:11:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:32.197 10:11:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:32.197 10:11:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:32.197 10:11:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:32.197 10:11:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:32.197 10:11:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:32.197 10:11:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.197 10:11:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.197 10:11:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
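nvmftestinit then builds a purely virtual NVMe-oF topology: the target runs inside the nvmf_tgt_ns_spdk network namespace and is reachable from the initiator over veth pairs joined by a bridge, with 10.0.0.1 on the initiator side, 10.0.0.2 as the first target address and 10.0.0.3 as the second. The trace that follows shows nvmf_veth_init issuing the individual ip commands; condensed into one place, the layout is roughly as below (device and namespace names taken from the trace, bring-up, bridging and error handling elided):

    # Minimal sketch of the veth/namespace topology nvmf_veth_init creates.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link add nvmf_br type bridge
    # bring the links up, enslave the *_br ends to nvmf_br, then open TCP/4420 in iptables

All of these commands appear verbatim in the trace below; only their interleaving with the harness's own bookkeeping is omitted here.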
00:24:32.197 10:11:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:32.197 10:11:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.197 10:11:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.197 10:11:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.197 10:11:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.197 10:11:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.197 10:11:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.197 10:11:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.197 10:11:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.197 10:11:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:32.197 10:11:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:32.197 Cannot find device "nvmf_tgt_br" 00:24:32.197 10:11:30 -- nvmf/common.sh@154 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:32.197 Cannot find device "nvmf_tgt_br2" 00:24:32.197 10:11:30 -- nvmf/common.sh@155 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:32.197 10:11:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:32.197 Cannot find device "nvmf_tgt_br" 00:24:32.197 10:11:30 -- nvmf/common.sh@157 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:32.197 Cannot find device "nvmf_tgt_br2" 00:24:32.197 10:11:30 -- nvmf/common.sh@158 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:32.197 10:11:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:32.197 10:11:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:32.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.197 10:11:30 -- nvmf/common.sh@161 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:32.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:32.197 10:11:30 -- nvmf/common.sh@162 -- # true 00:24:32.197 10:11:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:32.197 10:11:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:32.197 10:11:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:32.197 10:11:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:32.197 10:11:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:32.197 10:11:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.197 10:11:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:32.197 10:11:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:32.197 10:11:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:32.456 10:11:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:32.456 10:11:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:32.456 10:11:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:32.456 10:11:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:24:32.456 10:11:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.456 10:11:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.456 10:11:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.456 10:11:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:32.456 10:11:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:32.456 10:11:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.456 10:11:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.456 10:11:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.456 10:11:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.456 10:11:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.456 10:11:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:32.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:24:32.456 00:24:32.456 --- 10.0.0.2 ping statistics --- 00:24:32.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.456 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:32.456 10:11:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:32.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:32.456 00:24:32.456 --- 10.0.0.3 ping statistics --- 00:24:32.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.456 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:32.456 10:11:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:32.456 00:24:32.456 --- 10.0.0.1 ping statistics --- 00:24:32.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.456 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:32.456 10:11:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.456 10:11:30 -- nvmf/common.sh@421 -- # return 0 00:24:32.456 10:11:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:32.456 10:11:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.456 10:11:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:32.456 10:11:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:32.456 10:11:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.456 10:11:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:32.456 10:11:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:32.456 10:11:30 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:32.456 10:11:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:32.456 10:11:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.456 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:32.456 10:11:30 -- nvmf/common.sh@469 -- # nvmfpid=100283 00:24:32.456 10:11:30 -- nvmf/common.sh@470 -- # waitforlisten 100283 00:24:32.456 10:11:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:32.456 10:11:30 -- common/autotest_common.sh@829 -- # '[' -z 100283 ']' 00:24:32.456 10:11:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.456 10:11:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.456 10:11:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.456 10:11:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.456 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:32.456 [2024-12-16 10:11:31.031992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:32.456 [2024-12-16 10:11:31.032080] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.715 [2024-12-16 10:11:31.167300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:32.715 [2024-12-16 10:11:31.233153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:32.715 [2024-12-16 10:11:31.233632] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.715 [2024-12-16 10:11:31.233781] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.715 [2024-12-16 10:11:31.233987] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
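With connectivity to 10.0.0.2, 10.0.0.3 and 10.0.0.1 confirmed by the pings, nvmfappstart launches the SPDK target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3, PID 100283 in this run) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock is accepting commands. A minimal sketch of that start-and-wait pattern, using only what the trace shows (the polling loop is illustrative; the real waitforlisten also verifies the RPC server actually responds and handles stale sockets):

    # Start the target in the namespace and wait for its RPC Unix socket to appear.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    while ! [[ -S /var/tmp/spdk.sock ]]; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done

The -m 0x3 core mask gives the target two reactors, which is why two "Reactor started" notices (cores 0 and 1) follow in the trace.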
00:24:32.715 [2024-12-16 10:11:31.234342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.715 [2024-12-16 10:11:31.234348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.652 10:11:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.652 10:11:31 -- common/autotest_common.sh@862 -- # return 0 00:24:33.652 10:11:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:33.652 10:11:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.652 10:11:31 -- common/autotest_common.sh@10 -- # set +x 00:24:33.652 10:11:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.652 10:11:32 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.652 10:11:32 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:33.652 [2024-12-16 10:11:32.223351] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.652 10:11:32 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:33.911 Malloc0 00:24:33.911 10:11:32 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.169 10:11:32 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:34.428 10:11:32 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.687 [2024-12-16 10:11:33.112462] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:34.687 10:11:33 -- host/timeout.sh@32 -- # bdevperf_pid=100374 00:24:34.687 10:11:33 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:34.687 10:11:33 -- host/timeout.sh@34 -- # waitforlisten 100374 /var/tmp/bdevperf.sock 00:24:34.687 10:11:33 -- common/autotest_common.sh@829 -- # '[' -z 100374 ']' 00:24:34.687 10:11:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:34.687 10:11:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.687 10:11:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:34.687 10:11:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.687 10:11:33 -- common/autotest_common.sh@10 -- # set +x 00:24:34.687 [2024-12-16 10:11:33.169458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:34.687 [2024-12-16 10:11:33.169546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100374 ] 00:24:34.687 [2024-12-16 10:11:33.298849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.946 [2024-12-16 10:11:33.358688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.890 10:11:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.890 10:11:34 -- common/autotest_common.sh@862 -- # return 0 00:24:35.890 10:11:34 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:35.890 10:11:34 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:36.149 NVMe0n1 00:24:36.149 10:11:34 -- host/timeout.sh@51 -- # rpc_pid=100416 00:24:36.149 10:11:34 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.149 10:11:34 -- host/timeout.sh@53 -- # sleep 1 00:24:36.408 Running I/O for 10 seconds... 00:24:37.342 10:11:35 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.604 [2024-12-16 10:11:36.016821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.016986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 
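The sequence just traced is the whole data path for the timeout test: the target gets a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and bdevperf attaches to it as NVMe0 with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, so a lost connection is retried roughly every 2 seconds and the controller is only given up after 5 seconds without a working connection. Collected from the rpc.py calls shown above (the commands are the ones in the trace; only the harness wrappers are dropped):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side (default RPC socket /var/tmp/spdk.sock)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side (bdevperf RPC socket)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

Once bdevperf is running I/O, host/timeout.sh removes the 10.0.0.2:4420 listener (the nvmf_subsystem_remove_listener call above), and that is what produces the long run of recv-state errors and ABORTED - SQ DELETION completions that follows.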
00:24:37.604 [2024-12-16 10:11:36.016993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017080] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017205] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017214] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017342] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017398] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017469] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd490 is same with the state(5) to be set 00:24:37.604 [2024-12-16 10:11:36.017893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.604 [2024-12-16 10:11:36.017934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.604 [2024-12-16 10:11:36.017956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.604 [2024-12-16 10:11:36.017967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.604 [2024-12-16 10:11:36.017978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.017987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.017999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:37.605 [2024-12-16 10:11:36.018066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:37.605 [2024-12-16 10:11:36.018748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.605 [2024-12-16 10:11:36.018814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.605 [2024-12-16 10:11:36.018824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.605 [2024-12-16 10:11:36.018833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.018853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.018872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.018892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.018911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.018930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.018950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.018971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.018982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.018991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:65 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.606 [2024-12-16 10:11:36.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.606 [2024-12-16 10:11:36.019633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.606 [2024-12-16 10:11:36.019643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.019947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.019987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.019998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 
10:11:36.020203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.607 [2024-12-16 10:11:36.020429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.607 [2024-12-16 10:11:36.020449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.607 [2024-12-16 10:11:36.020465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.608 [2024-12-16 10:11:36.020601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1597780 is same with the state(5) to be set 00:24:37.608 [2024-12-16 10:11:36.020623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:37.608 [2024-12-16 10:11:36.020631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:37.608 [2024-12-16 
10:11:36.020639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:8 PRP1 0x0 PRP2 0x0 00:24:37.608 [2024-12-16 10:11:36.020648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020703] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1597780 was disconnected and freed. reset controller. 00:24:37.608 [2024-12-16 10:11:36.020796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.608 [2024-12-16 10:11:36.020813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.608 [2024-12-16 10:11:36.020833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.608 [2024-12-16 10:11:36.020851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.608 [2024-12-16 10:11:36.020870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.608 [2024-12-16 10:11:36.020878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15128c0 is same with the state(5) to be set 00:24:37.608 [2024-12-16 10:11:36.021094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.608 [2024-12-16 10:11:36.021128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15128c0 (9): Bad file descriptor 00:24:37.608 [2024-12-16 10:11:36.021248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.608 [2024-12-16 10:11:36.021299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.608 [2024-12-16 10:11:36.021322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15128c0 with addr=10.0.0.2, port=4420 00:24:37.608 [2024-12-16 10:11:36.021333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15128c0 is same with the state(5) to be set 00:24:37.608 [2024-12-16 10:11:36.021365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15128c0 (9): Bad file descriptor 00:24:37.608 [2024-12-16 10:11:36.021385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.608 [2024-12-16 10:11:36.021395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.608 [2024-12-16 10:11:36.021405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
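[editor's note] The stretch of log above is the first forced failure in host/timeout.sh: every I/O still queued on qpair 0x1597780 is completed as ABORTED - SQ DELETION, the qpair is disconnected and freed, and the follow-up reconnect to 10.0.0.2:4420 dies in posix_sock_create with errno 111 (connection refused), so controller reinitialization fails before another reset is scheduled. When reading a capture like this by hand it is usually enough to pull out just those transitions; the shell lines below are a hypothetical post-processing helper for a saved copy of this log, not part of the SPDK test itself, and the file name timeout.log is an assumption.

  # Hypothetical triage of a saved copy of this capture ("timeout.log" is an assumed name).
  LOG=${1:-timeout.log}
  # How many queued I/Os were completed as aborted when the submission queue was deleted:
  grep -c 'ABORTED - SQ DELETION' "$LOG"
  # The socket-level reconnect failures and the reset outcomes, in order of appearance:
  grep -nE 'connect\(\) failed, errno = 111|resetting controller|Resetting controller failed' "$LOG"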
00:24:37.608 [2024-12-16 10:11:36.031604] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.608 [2024-12-16 10:11:36.031658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.608 10:11:36 -- host/timeout.sh@56 -- # sleep 2 00:24:39.511 [2024-12-16 10:11:38.031741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.511 [2024-12-16 10:11:38.031848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.511 [2024-12-16 10:11:38.031866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15128c0 with addr=10.0.0.2, port=4420 00:24:39.511 [2024-12-16 10:11:38.031877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15128c0 is same with the state(5) to be set 00:24:39.511 [2024-12-16 10:11:38.031898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15128c0 (9): Bad file descriptor 00:24:39.511 [2024-12-16 10:11:38.031913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.511 [2024-12-16 10:11:38.031922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.511 [2024-12-16 10:11:38.031931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.511 [2024-12-16 10:11:38.031951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.511 [2024-12-16 10:11:38.031961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.511 10:11:38 -- host/timeout.sh@57 -- # get_controller 00:24:39.511 10:11:38 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.511 10:11:38 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:39.770 10:11:38 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:39.770 10:11:38 -- host/timeout.sh@58 -- # get_bdev 00:24:39.770 10:11:38 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:39.770 10:11:38 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:40.028 10:11:38 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:40.028 10:11:38 -- host/timeout.sh@61 -- # sleep 5 00:24:41.931 [2024-12-16 10:11:40.032068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.931 [2024-12-16 10:11:40.032184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.931 [2024-12-16 10:11:40.032203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15128c0 with addr=10.0.0.2, port=4420 00:24:41.931 [2024-12-16 10:11:40.032215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15128c0 is same with the state(5) to be set 00:24:41.931 [2024-12-16 10:11:40.032240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15128c0 (9): Bad file descriptor 00:24:41.932 [2024-12-16 10:11:40.032258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.932 [2024-12-16 10:11:40.032267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:24:41.932 [2024-12-16 10:11:40.032276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.932 [2024-12-16 10:11:40.032302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.932 [2024-12-16 10:11:40.032313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.834 [2024-12-16 10:11:42.032335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.834 [2024-12-16 10:11:42.032411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.834 [2024-12-16 10:11:42.032439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.834 [2024-12-16 10:11:42.032448] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:43.834 [2024-12-16 10:11:42.032473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.401 00:24:44.402 Latency(us) 00:24:44.402 [2024-12-16T10:11:43.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.402 [2024-12-16T10:11:43.027Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.402 Verification LBA range: start 0x0 length 0x4000 00:24:44.402 NVMe0n1 : 8.13 2094.60 8.18 15.74 0.00 60576.50 2353.34 7015926.69 00:24:44.402 [2024-12-16T10:11:43.027Z] =================================================================================================================== 00:24:44.402 [2024-12-16T10:11:43.027Z] Total : 2094.60 8.18 15.74 0.00 60576.50 2353.34 7015926.69 00:24:44.660 0 00:24:45.227 10:11:43 -- host/timeout.sh@62 -- # get_controller 00:24:45.227 10:11:43 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.227 10:11:43 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:45.227 10:11:43 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:45.227 10:11:43 -- host/timeout.sh@63 -- # get_bdev 00:24:45.227 10:11:43 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:45.227 10:11:43 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:45.486 10:11:44 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:45.486 10:11:44 -- host/timeout.sh@65 -- # wait 100416 00:24:45.486 10:11:44 -- host/timeout.sh@67 -- # killprocess 100374 00:24:45.486 10:11:44 -- common/autotest_common.sh@936 -- # '[' -z 100374 ']' 00:24:45.486 10:11:44 -- common/autotest_common.sh@940 -- # kill -0 100374 00:24:45.486 10:11:44 -- common/autotest_common.sh@941 -- # uname 00:24:45.486 10:11:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.486 10:11:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100374 00:24:45.486 killing process with pid 100374 00:24:45.486 Received shutdown signal, test time was about 9.196387 seconds 00:24:45.486 00:24:45.486 Latency(us) 00:24:45.486 [2024-12-16T10:11:44.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.486 [2024-12-16T10:11:44.111Z] =================================================================================================================== 00:24:45.486 [2024-12-16T10:11:44.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.486 10:11:44 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:45.486 10:11:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:45.486 10:11:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100374' 00:24:45.486 10:11:44 -- common/autotest_common.sh@955 -- # kill 100374 00:24:45.486 10:11:44 -- common/autotest_common.sh@960 -- # wait 100374 00:24:45.745 10:11:44 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.003 [2024-12-16 10:11:44.473228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.003 10:11:44 -- host/timeout.sh@74 -- # bdevperf_pid=100575 00:24:46.003 10:11:44 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:46.003 10:11:44 -- host/timeout.sh@76 -- # waitforlisten 100575 /var/tmp/bdevperf.sock 00:24:46.003 10:11:44 -- common/autotest_common.sh@829 -- # '[' -z 100575 ']' 00:24:46.003 10:11:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.003 10:11:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.003 10:11:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.003 10:11:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.003 10:11:44 -- common/autotest_common.sh@10 -- # set +x 00:24:46.003 [2024-12-16 10:11:44.546216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:46.003 [2024-12-16 10:11:44.546330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100575 ] 00:24:46.262 [2024-12-16 10:11:44.686772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.262 [2024-12-16 10:11:44.752455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.197 10:11:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.197 10:11:45 -- common/autotest_common.sh@862 -- # return 0 00:24:47.197 10:11:45 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:47.197 10:11:45 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:47.456 NVMe0n1 00:24:47.715 10:11:46 -- host/timeout.sh@84 -- # rpc_pid=100617 00:24:47.715 10:11:46 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.715 10:11:46 -- host/timeout.sh@86 -- # sleep 1 00:24:47.715 Running I/O for 10 seconds... 
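[editor's note] At this point the first bdevperf instance has been shut down and host/timeout.sh starts its second scenario: the target listener on 10.0.0.2:4420 is re-added, a fresh bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f) is launched, and NVMe0 is attached with an explicit reconnect policy (--ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1), which is what makes the host retry the connection roughly once per second and give the controller up a few seconds after the path drops. A minimal sketch of the same RPC sequence, using only the commands visible in the trace above (the RPC and SOCK variables are just shorthand for the paths shown there, and the jq checks mirror the script's get_controller/get_bdev helpers), would be:

  # Sketch of the attach sequence from the trace above, against the bdevperf RPC socket.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Same sanity checks as host/timeout.sh: controller and bdev names should come back non-empty.
  "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
  "$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name'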
00:24:48.651 10:11:47 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:48.912 [2024-12-16 10:11:47.370779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1982ca0 is same with the state(5) to be set
00:24:48.913 [2024-12-16 10:11:47.370839 .. 10:11:47.371512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1982ca0 is same with the state(5) to be set [same message repeated at each of these timestamps]
00:24:48.913 [2024-12-16 10:11:47.371780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.913 [2024-12-16 10:11:47.371826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.913 [2024-12-16 10:11:47.371852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.913 [2024-12-16 10:11:47.371862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.913 [2024-12-16 10:11:47.371874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.913 [2024-12-16 10:11:47.371883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.913 [2024-12-16 10:11:47.371894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.913 [2024-12-16 10:11:47.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.913 [2024-12-16 10:11:47.371914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.913 [2024-12-16 10:11:47.371923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.913 [2024-12-16 10:11:47.371934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11232
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.371943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.371953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.371962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.371973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.371982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.371992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:48.914 [2024-12-16 10:11:47.372137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 
10:11:47.372379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.914 [2024-12-16 10:11:47.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.914 [2024-12-16 10:11:47.372810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.914 [2024-12-16 10:11:47.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.372829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.372848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.372868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.372987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.372998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 
[2024-12-16 10:11:47.373216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.915 [2024-12-16 10:11:47.373553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.915 [2024-12-16 10:11:47.373668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.915 [2024-12-16 10:11:47.373677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.373697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.373916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.373956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.373976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.373987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.373996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 
[2024-12-16 10:11:47.374122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:48.916 [2024-12-16 10:11:47.374409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.916 [2024-12-16 10:11:47.374557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.916 [2024-12-16 10:11:47.374568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9f660 is same with the state(5) to be set 00:24:48.916 [2024-12-16 10:11:47.374580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:48.917 [2024-12-16 10:11:47.374588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:48.917 [2024-12-16 10:11:47.374603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12024 len:8 PRP1 0x0 PRP2 0x0 00:24:48.917 [2024-12-16 10:11:47.374612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.917 [2024-12-16 10:11:47.374666] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a9f660 was disconnected and freed. reset controller. 00:24:48.917 [2024-12-16 10:11:47.374768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.917 [2024-12-16 10:11:47.374794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.917 [2024-12-16 10:11:47.374806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.917 [2024-12-16 10:11:47.374816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.917 [2024-12-16 10:11:47.374825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.917 [2024-12-16 10:11:47.374834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.917 [2024-12-16 10:11:47.374844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.917 [2024-12-16 10:11:47.374853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.917 [2024-12-16 10:11:47.374861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:24:48.917 [2024-12-16 10:11:47.375071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.917 [2024-12-16 10:11:47.375099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:24:48.917 [2024-12-16 10:11:47.375219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-12-16 10:11:47.375269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.917 [2024-12-16 10:11:47.375291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:24:48.917 [2024-12-16 10:11:47.375302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:24:48.917 [2024-12-16 10:11:47.375321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): 
Bad file descriptor 00:24:48.917 [2024-12-16 10:11:47.375338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:48.917 [2024-12-16 10:11:47.375347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:48.917 [2024-12-16 10:11:47.375388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.917 [2024-12-16 10:11:47.385605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:48.917 [2024-12-16 10:11:47.385657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.917 10:11:47 -- host/timeout.sh@90 -- # sleep 1 00:24:49.853 [2024-12-16 10:11:48.385742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-12-16 10:11:48.385838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:49.853 [2024-12-16 10:11:48.385856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:24:49.853 [2024-12-16 10:11:48.385866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:24:49.853 [2024-12-16 10:11:48.385886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:24:49.853 [2024-12-16 10:11:48.385903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:49.853 [2024-12-16 10:11:48.385912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:49.853 [2024-12-16 10:11:48.385921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:49.853 [2024-12-16 10:11:48.385951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:49.853 [2024-12-16 10:11:48.385963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:49.853 10:11:48 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.112 [2024-12-16 10:11:48.682456] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.112 10:11:48 -- host/timeout.sh@92 -- # wait 100617 00:24:51.048 [2024-12-16 10:11:49.401499] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
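The trace above is one full listener-toggle cycle of the timeout test: the TCP listener is removed while bdevperf I/O is outstanding, every queued command is completed as ABORTED - SQ DELETION, the host's reconnect attempts fail with connect() errno 111 (ECONNREFUSED), and once the listener is re-added the controller reset succeeds. A minimal shell sketch of that cycle, reconstructed only from the rpc.py invocations visible in this log rather than from the actual host/timeout.sh source, would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the TCP listener under an active connection; in-flight I/O is aborted
  # (SQ DELETION) and the host's reconnects fail with errno 111 until it returns.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # Restore the listener so the next controller reset/reconnect can succeed.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  wait "$bdevperf_pid"   # hypothetical variable; this run waits on pid 100617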
00:24:57.614 
00:24:57.614                                                                    Latency(us)
00:24:57.614 [2024-12-16T10:11:56.239Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:24:57.614 [2024-12-16T10:11:56.239Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:57.614 	 Verification LBA range: start 0x0 length 0x4000
00:24:57.614 	 NVMe0n1            :      10.01   10897.56      42.57       0.00      0.00    11726.46    1109.64 3019898.88
00:24:57.614 [2024-12-16T10:11:56.239Z] ===================================================================================================================
00:24:57.614 [2024-12-16T10:11:56.239Z] Total              :            10897.56      42.57       0.00      0.00    11726.46    1109.64 3019898.88
00:24:57.614 0
00:24:57.614 10:11:56 -- host/timeout.sh@97 -- # rpc_pid=100739
00:24:57.615 10:11:56 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:57.615 10:11:56 -- host/timeout.sh@98 -- # sleep 1
00:24:57.874 Running I/O for 10 seconds...
00:24:58.810 10:11:57 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:59.072 [2024-12-16 10:11:57.476614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set
00:24:59.072 [2024-12-16 10:11:57.476672 .. 10:11:57.477201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set [same message repeated at each of these timestamps]
00:24:59.072 [2024-12-16 10:11:57.477209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same
with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477240] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477287] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.072 [2024-12-16 10:11:57.477320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477376] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477384] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477401] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17de110 is same with the state(5) to be set 00:24:59.073 [2024-12-16 10:11:57.477894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.477930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.477952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.477963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.477974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.477983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.477994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 
10:11:57.478639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.073 [2024-12-16 10:11:57.478775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.073 [2024-12-16 10:11:57.478783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.478821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.478915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.478981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:98 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.074 [2024-12-16 10:11:57.479356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.074 [2024-12-16 10:11:57.479625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.074 [2024-12-16 10:11:57.479637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.479668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.479869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.479889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.479931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.479970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.479981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.479990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 
10:11:57.480189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.075 [2024-12-16 10:11:57.480484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.075 [2024-12-16 10:11:57.480495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.075 [2024-12-16 10:11:57.480504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.076 [2024-12-16 10:11:57.480548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:59.076 [2024-12-16 10:11:57.480588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.076 [2024-12-16 10:11:57.480753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6b1d0 is same with the state(5) to be set 00:24:59.076 [2024-12-16 10:11:57.480774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:59.076 [2024-12-16 10:11:57.480782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:59.076 [2024-12-16 10:11:57.480792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5624 len:8 PRP1 0x0 PRP2 0x0 00:24:59.076 [2024-12-16 10:11:57.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.076 [2024-12-16 10:11:57.480855] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a6b1d0 was disconnected and freed. reset controller. 
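The long dump above is bdev_nvme flushing its outstanding I/O while the qpair is torn down: every queued READ and WRITE is completed manually with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion, before qpair 0x1a6b1d0 is freed and the controller reset is retried. If this console output has been saved to a file, the aborted commands can be tallied with a couple of greps (build.log below is a placeholder name, not a file produced by this job):

  # Count the completions aborted during the qpair teardown captured above.
  grep -c 'ABORTED - SQ DELETION' build.log

  # Split the same aborts by opcode (READ vs WRITE) as a quick sanity check.
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c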
00:24:59.076 [2024-12-16 10:11:57.481095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.076 [2024-12-16 10:11:57.481175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:24:59.076 [2024-12-16 10:11:57.481294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-12-16 10:11:57.481345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-12-16 10:11:57.481361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:24:59.076 [2024-12-16 10:11:57.481392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:24:59.076 [2024-12-16 10:11:57.481418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:24:59.076 [2024-12-16 10:11:57.481435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.076 [2024-12-16 10:11:57.481445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.076 [2024-12-16 10:11:57.481456] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.076 [2024-12-16 10:11:57.481476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.076 [2024-12-16 10:11:57.481488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.076 10:11:57 -- host/timeout.sh@101 -- # sleep 3 00:25:00.013 [2024-12-16 10:11:58.481562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.013 [2024-12-16 10:11:58.481627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.013 [2024-12-16 10:11:58.481644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:25:00.013 [2024-12-16 10:11:58.481654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:25:00.013 [2024-12-16 10:11:58.481673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:25:00.013 [2024-12-16 10:11:58.481689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.013 [2024-12-16 10:11:58.481698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.013 [2024-12-16 10:11:58.481707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.013 [2024-12-16 10:11:58.481726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.013 [2024-12-16 10:11:58.481737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.950 [2024-12-16 10:11:59.481802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.950 [2024-12-16 10:11:59.481892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.950 [2024-12-16 10:11:59.481908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:25:00.950 [2024-12-16 10:11:59.481918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:25:00.950 [2024-12-16 10:11:59.481935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:25:00.950 [2024-12-16 10:11:59.481950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.950 [2024-12-16 10:11:59.481959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.950 [2024-12-16 10:11:59.481967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.950 [2024-12-16 10:11:59.481984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.950 [2024-12-16 10:11:59.481994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 [2024-12-16 10:12:00.483866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-12-16 10:12:00.483961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-12-16 10:12:00.483979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1a8c0 with addr=10.0.0.2, port=4420 00:25:01.886 [2024-12-16 10:12:00.483992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1a8c0 is same with the state(5) to be set 00:25:01.886 [2024-12-16 10:12:00.484121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1a8c0 (9): Bad file descriptor 00:25:01.886 [2024-12-16 10:12:00.484305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.886 [2024-12-16 10:12:00.484318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.886 [2024-12-16 10:12:00.484333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.886 [2024-12-16 10:12:00.486724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.886 [2024-12-16 10:12:00.486766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.886 10:12:00 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.147 [2024-12-16 10:12:00.761571] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.407 10:12:00 -- host/timeout.sh@103 -- # wait 100739 00:25:02.974 [2024-12-16 10:12:01.503253] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
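The reset finally succeeds above because host/timeout.sh has just re-added the TCP listener it removed earlier, while the initiator side was configured to keep retrying for a few seconds before declaring the controller lost. A rough sketch of that remove / re-add cycle, using the same rpc.py calls that appear elsewhere in this log (paths, NQN and address are copied from the log; the sleep is an assumed stand-in for the test's own waits):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the listener: every reconnect attempt now fails with ECONNREFUSED (errno 111).
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Re-add the listener: the next reconnect attempt succeeds and the controller reset completes.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420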
00:25:08.245 00:25:08.245 Latency(us) 00:25:08.245 [2024-12-16T10:12:06.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.245 [2024-12-16T10:12:06.870Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:08.245 Verification LBA range: start 0x0 length 0x4000 00:25:08.245 NVMe0n1 : 10.01 9092.29 35.52 6579.23 0.00 8154.23 588.33 3019898.88 00:25:08.245 [2024-12-16T10:12:06.870Z] =================================================================================================================== 00:25:08.245 [2024-12-16T10:12:06.870Z] Total : 9092.29 35.52 6579.23 0.00 8154.23 0.00 3019898.88 00:25:08.245 0 00:25:08.245 10:12:06 -- host/timeout.sh@105 -- # killprocess 100575 00:25:08.245 10:12:06 -- common/autotest_common.sh@936 -- # '[' -z 100575 ']' 00:25:08.245 10:12:06 -- common/autotest_common.sh@940 -- # kill -0 100575 00:25:08.245 10:12:06 -- common/autotest_common.sh@941 -- # uname 00:25:08.245 10:12:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.245 10:12:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100575 00:25:08.245 killing process with pid 100575 00:25:08.245 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.245 00:25:08.245 Latency(us) 00:25:08.245 [2024-12-16T10:12:06.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.245 [2024-12-16T10:12:06.870Z] =================================================================================================================== 00:25:08.245 [2024-12-16T10:12:06.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.245 10:12:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:08.245 10:12:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:08.245 10:12:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100575' 00:25:08.245 10:12:06 -- common/autotest_common.sh@955 -- # kill 100575 00:25:08.245 10:12:06 -- common/autotest_common.sh@960 -- # wait 100575 00:25:08.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.246 10:12:06 -- host/timeout.sh@110 -- # bdevperf_pid=100866 00:25:08.246 10:12:06 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:08.246 10:12:06 -- host/timeout.sh@112 -- # waitforlisten 100866 /var/tmp/bdevperf.sock 00:25:08.246 10:12:06 -- common/autotest_common.sh@829 -- # '[' -z 100866 ']' 00:25:08.246 10:12:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.246 10:12:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.246 10:12:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.246 10:12:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.246 10:12:06 -- common/autotest_common.sh@10 -- # set +x 00:25:08.246 [2024-12-16 10:12:06.670326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
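The bdevperf instance started above runs with -z, so it comes up idle and waits for RPC commands on /var/tmp/bdevperf.sock; the test then configures the NVMe bdev and kicks off the workload through that socket. A condensed sketch of that driving sequence, built from the commands visible in this log (repository path and options are taken from the log; the fixed sleep is an assumption, the test itself waits on the RPC socket instead):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) with a 128-deep, 4 KiB random-read job on core 2 (-m 0x4).
    $SPDK/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
    sleep 1

    # Configure retries/error logging, attach the NVMe-oF controller, then start the I/O.
    $SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests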
00:25:08.246 [2024-12-16 10:12:06.670734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100866 ] 00:25:08.246 [2024-12-16 10:12:06.805922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.505 [2024-12-16 10:12:06.873077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.072 10:12:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.072 10:12:07 -- common/autotest_common.sh@862 -- # return 0 00:25:09.072 10:12:07 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100866 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:09.072 10:12:07 -- host/timeout.sh@116 -- # dtrace_pid=100894 00:25:09.072 10:12:07 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:09.331 10:12:07 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:09.899 NVMe0n1 00:25:09.899 10:12:08 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:09.899 10:12:08 -- host/timeout.sh@124 -- # rpc_pid=100946 00:25:09.899 10:12:08 -- host/timeout.sh@125 -- # sleep 1 00:25:09.899 Running I/O for 10 seconds... 00:25:10.836 10:12:09 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.097 [2024-12-16 10:12:09.574162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1ba0 is same with the state(5) to be set 00:25:11.097 [2024-12-16 10:12:09.574298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17e1ba0 is same with the state(5) to be set (the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x17e1ba0 repeats many more times at this point in the log; the duplicate entries are omitted here) 00:25:11.098 [2024-12-16 10:12:09.575573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.098 [2024-12-16 10:12:09.575698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:65 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.098 [2024-12-16 10:12:09.575721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.098 [2024-12-16 10:12:09.575758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.098 [2024-12-16 10:12:09.575779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.098 [2024-12-16 10:12:09.575798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.098 [2024-12-16 10:12:09.575807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4208 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.575989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.575999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 
[2024-12-16 10:12:09.576175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.099 [2024-12-16 10:12:09.576717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.099 [2024-12-16 10:12:09.576725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.576987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.576995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:11.100 [2024-12-16 10:12:09.577327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 
10:12:09.577603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.100 [2024-12-16 10:12:09.577742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.100 [2024-12-16 10:12:09.577751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.577982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.577992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59112 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.101 [2024-12-16 10:12:09.578499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.101 [2024-12-16 10:12:09.578507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:11.102 [2024-12-16 10:12:09.578546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.102 [2024-12-16 10:12:09.578565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.102 [2024-12-16 10:12:09.578584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.102 [2024-12-16 10:12:09.578603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.102 [2024-12-16 10:12:09.578621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6780 is same with the state(5) to be set 00:25:11.102 [2024-12-16 10:12:09.578656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.102 [2024-12-16 10:12:09.578663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.102 [2024-12-16 10:12:09.578676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67800 len:8 PRP1 0x0 PRP2 0x0 00:25:11.102 [2024-12-16 10:12:09.578684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578761] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a6780 was disconnected and freed. reset controller. 
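The wall of NOTICE pairs above is the SPDK NVMe driver accounting for every READ still queued on I/O submission queue 1 when the connection was torn down: each nvme_io_qpair_print_command line names the command (cid, nsid, LBA), the matching spdk_nvme_print_completion line shows it being completed manually with ABORTED - SQ DELETION (the 00/08 status in parentheses), and once the backlog is drained the qpair 0x19a6780 is disconnected and freed so the controller reset can proceed. A minimal shell sketch for summarising such a dump, assuming the console output has been saved to a hypothetical build.log (grep/awk only, patterns copied from the lines above):

    # count the queued commands aborted by the SQ deletion
    grep -o 'ABORTED - SQ DELETION (00/08) qid:1' build.log | wc -l

    # list the LBAs that were in flight at the time
    grep -o 'READ sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' build.log | sed 's/.*lba://' | sort -n | uniq | head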
00:25:11.102 [2024-12-16 10:12:09.578890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.102 [2024-12-16 10:12:09.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.102 [2024-12-16 10:12:09.578945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.102 [2024-12-16 10:12:09.578964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.102 [2024-12-16 10:12:09.578982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.102 [2024-12-16 10:12:09.578990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19218c0 is same with the state(5) to be set 00:25:11.102 [2024-12-16 10:12:09.579281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.102 [2024-12-16 10:12:09.579312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19218c0 (9): Bad file descriptor 00:25:11.102 [2024-12-16 10:12:09.579440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.102 [2024-12-16 10:12:09.579496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.102 [2024-12-16 10:12:09.579528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19218c0 with addr=10.0.0.2, port=4420 00:25:11.102 [2024-12-16 10:12:09.579559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19218c0 is same with the state(5) to be set 00:25:11.102 [2024-12-16 10:12:09.579602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19218c0 (9): Bad file descriptor 00:25:11.102 [2024-12-16 10:12:09.579633] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.102 [2024-12-16 10:12:09.579649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.102 [2024-12-16 10:12:09.579659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.102 [2024-12-16 10:12:09.592653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
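This block is the other half of the reset: the admin queue's outstanding ASYNC EVENT REQUESTs are aborted with the same SQ-deletion status, nvme_ctrlr_disconnect starts the reset of nqn.2016-06.io.spdk:cnode1, and the first reconnect attempt to 10.0.0.2:4420 fails immediately because posix_sock_create's connect() returns errno 111, apparently because nothing is accepting on that address and port at this point in the test; controller reinitialization therefore fails and bdev_nvme reports "Resetting controller failed" before scheduling the next attempt. A quick way to confirm what errno 111 means on this platform (a one-line sketch, nothing SPDK-specific):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused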
00:25:11.102 [2024-12-16 10:12:09.592685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.102 10:12:09 -- host/timeout.sh@128 -- # wait 100946 00:25:13.005 [2024-12-16 10:12:11.592922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.005 [2024-12-16 10:12:11.593044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.005 [2024-12-16 10:12:11.593064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19218c0 with addr=10.0.0.2, port=4420 00:25:13.005 [2024-12-16 10:12:11.593079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19218c0 is same with the state(5) to be set 00:25:13.005 [2024-12-16 10:12:11.593111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19218c0 (9): Bad file descriptor 00:25:13.005 [2024-12-16 10:12:11.593163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.005 [2024-12-16 10:12:11.593190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.005 [2024-12-16 10:12:11.593202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.005 [2024-12-16 10:12:11.593234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:13.005 [2024-12-16 10:12:11.593246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.539 [2024-12-16 10:12:13.593479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.539 [2024-12-16 10:12:13.593638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.539 [2024-12-16 10:12:13.593668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19218c0 with addr=10.0.0.2, port=4420 00:25:15.539 [2024-12-16 10:12:13.593688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19218c0 is same with the state(5) to be set 00:25:15.539 [2024-12-16 10:12:13.593723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19218c0 (9): Bad file descriptor 00:25:15.539 [2024-12-16 10:12:13.593753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.539 [2024-12-16 10:12:13.593777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.539 [2024-12-16 10:12:13.593803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.539 [2024-12-16 10:12:13.593853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.539 [2024-12-16 10:12:13.593872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.442 [2024-12-16 10:12:15.593922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:17.442 [2024-12-16 10:12:15.593979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.442 [2024-12-16 10:12:15.594009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.442 [2024-12-16 10:12:15.594019] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:17.442 [2024-12-16 10:12:15.594041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.009 00:25:18.009 Latency(us) 00:25:18.009 [2024-12-16T10:12:16.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.009 [2024-12-16T10:12:16.634Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:18.009 NVMe0n1 : 8.19 3240.23 12.66 15.62 0.00 39253.40 2770.39 7015926.69 00:25:18.009 [2024-12-16T10:12:16.634Z] =================================================================================================================== 00:25:18.009 [2024-12-16T10:12:16.634Z] Total : 3240.23 12.66 15.62 0.00 39253.40 2770.39 7015926.69 00:25:18.009 0 00:25:18.009 10:12:16 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:18.009 Attaching 5 probes... 00:25:18.009 1448.523437: reset bdev controller NVMe0 00:25:18.009 1448.599298: reconnect bdev controller NVMe0 00:25:18.009 3461.989307: reconnect delay bdev controller NVMe0 00:25:18.009 3462.015588: reconnect bdev controller NVMe0 00:25:18.009 5462.534580: reconnect delay bdev controller NVMe0 00:25:18.009 5462.576258: reconnect bdev controller NVMe0 00:25:18.009 7463.160714: reconnect delay bdev controller NVMe0 00:25:18.009 7463.171889: reconnect bdev controller NVMe0 00:25:18.009 10:12:16 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:18.009 10:12:16 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:18.009 10:12:16 -- host/timeout.sh@136 -- # kill 100894 00:25:18.009 10:12:16 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:18.009 10:12:16 -- host/timeout.sh@139 -- # killprocess 100866 00:25:18.009 10:12:16 -- common/autotest_common.sh@936 -- # '[' -z 100866 ']' 00:25:18.009 10:12:16 -- common/autotest_common.sh@940 -- # kill -0 100866 00:25:18.009 10:12:16 -- common/autotest_common.sh@941 -- # uname 00:25:18.009 10:12:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.009 10:12:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100866 00:25:18.267 killing process with pid 100866 00:25:18.267 Received shutdown signal, test time was about 8.263775 seconds 00:25:18.267 00:25:18.267 Latency(us) 00:25:18.267 [2024-12-16T10:12:16.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.267 [2024-12-16T10:12:16.892Z] =================================================================================================================== 00:25:18.267 [2024-12-16T10:12:16.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.267 10:12:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:18.267 10:12:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:18.267 10:12:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100866' 00:25:18.267 10:12:16 -- common/autotest_common.sh@955 -- # kill 100866 00:25:18.267 10:12:16 -- common/autotest_common.sh@960 -- # wait 100866 00:25:18.267 
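With the controller stuck in the failed state, bdevperf wraps up: the summary table reports roughly 3240 IOPS (12.66 MiB/s) over an 8.19 s runtime, with about 15.6 failed I/O per second from the aborted reads, and the cat of trace.txt shows one "reset bdev controller" event followed by three "reconnect delay" / "reconnect" pairs spaced close to two seconds apart. host/timeout.sh@132 then counts the delayed reconnects with grep -c; the guard (( 3 <= 2 )) evaluating false reads here as the expected outcome (more than two delayed reconnects were observed), after which the helper processes are killed and the trace file removed. The two-second spacing can be checked straight from the trace timestamps (a sketch, assuming the values before the colon are milliseconds and that trace.txt still holds the dump shown above):

    grep 'reconnect delay bdev controller NVMe0' trace.txt | cut -d: -f1 |
      awk 'NR > 1 { printf "gap between attempts: %.0f ms\n", $1 - prev } { prev = $1 }'
    # expected: two gaps of roughly 2000 ms each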
10:12:16 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.525 10:12:17 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:18.525 10:12:17 -- host/timeout.sh@145 -- # nvmftestfini 00:25:18.525 10:12:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:18.525 10:12:17 -- nvmf/common.sh@116 -- # sync 00:25:18.525 10:12:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:18.525 10:12:17 -- nvmf/common.sh@119 -- # set +e 00:25:18.525 10:12:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:18.525 10:12:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:18.525 rmmod nvme_tcp 00:25:18.783 rmmod nvme_fabrics 00:25:18.783 rmmod nvme_keyring 00:25:18.783 10:12:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:18.783 10:12:17 -- nvmf/common.sh@123 -- # set -e 00:25:18.783 10:12:17 -- nvmf/common.sh@124 -- # return 0 00:25:18.783 10:12:17 -- nvmf/common.sh@477 -- # '[' -n 100283 ']' 00:25:18.783 10:12:17 -- nvmf/common.sh@478 -- # killprocess 100283 00:25:18.783 10:12:17 -- common/autotest_common.sh@936 -- # '[' -z 100283 ']' 00:25:18.783 10:12:17 -- common/autotest_common.sh@940 -- # kill -0 100283 00:25:18.783 10:12:17 -- common/autotest_common.sh@941 -- # uname 00:25:18.783 10:12:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.783 10:12:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100283 00:25:18.783 10:12:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:18.783 killing process with pid 100283 00:25:18.783 10:12:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:18.783 10:12:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100283' 00:25:18.783 10:12:17 -- common/autotest_common.sh@955 -- # kill 100283 00:25:18.783 10:12:17 -- common/autotest_common.sh@960 -- # wait 100283 00:25:19.042 10:12:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:19.042 10:12:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:19.042 10:12:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:19.042 10:12:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.042 10:12:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:19.042 10:12:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.042 10:12:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.042 10:12:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.042 10:12:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:19.042 00:25:19.042 real 0m47.075s 00:25:19.042 user 2m18.301s 00:25:19.042 sys 0m5.185s 00:25:19.042 10:12:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:19.042 ************************************ 00:25:19.042 END TEST nvmf_timeout 00:25:19.042 ************************************ 00:25:19.042 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.042 10:12:17 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:19.042 10:12:17 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:19.042 10:12:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.042 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.042 10:12:17 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:19.042 00:25:19.042 real 17m22.951s 00:25:19.042 user 55m18.337s 00:25:19.042 sys 3m55.059s 00:25:19.042 10:12:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:19.042 ************************************ 
00:25:19.042 END TEST nvmf_tcp 00:25:19.042 ************************************ 00:25:19.042 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.042 10:12:17 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:19.042 10:12:17 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:19.042 10:12:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:19.042 10:12:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.042 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.042 ************************************ 00:25:19.042 START TEST spdkcli_nvmf_tcp 00:25:19.042 ************************************ 00:25:19.042 10:12:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:19.302 * Looking for test storage... 00:25:19.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:19.302 10:12:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:19.302 10:12:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:19.302 10:12:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:19.302 10:12:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:19.302 10:12:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:19.302 10:12:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:19.302 10:12:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:19.302 10:12:17 -- scripts/common.sh@335 -- # IFS=.-: 00:25:19.302 10:12:17 -- scripts/common.sh@335 -- # read -ra ver1 00:25:19.302 10:12:17 -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.302 10:12:17 -- scripts/common.sh@336 -- # read -ra ver2 00:25:19.302 10:12:17 -- scripts/common.sh@337 -- # local 'op=<' 00:25:19.302 10:12:17 -- scripts/common.sh@339 -- # ver1_l=2 00:25:19.302 10:12:17 -- scripts/common.sh@340 -- # ver2_l=1 00:25:19.302 10:12:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:19.302 10:12:17 -- scripts/common.sh@343 -- # case "$op" in 00:25:19.302 10:12:17 -- scripts/common.sh@344 -- # : 1 00:25:19.302 10:12:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:19.302 10:12:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.302 10:12:17 -- scripts/common.sh@364 -- # decimal 1 00:25:19.302 10:12:17 -- scripts/common.sh@352 -- # local d=1 00:25:19.302 10:12:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.302 10:12:17 -- scripts/common.sh@354 -- # echo 1 00:25:19.302 10:12:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:19.302 10:12:17 -- scripts/common.sh@365 -- # decimal 2 00:25:19.302 10:12:17 -- scripts/common.sh@352 -- # local d=2 00:25:19.302 10:12:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.302 10:12:17 -- scripts/common.sh@354 -- # echo 2 00:25:19.302 10:12:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:19.302 10:12:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:19.302 10:12:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:19.302 10:12:17 -- scripts/common.sh@367 -- # return 0 00:25:19.302 10:12:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.302 10:12:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:19.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.302 --rc genhtml_branch_coverage=1 00:25:19.302 --rc genhtml_function_coverage=1 00:25:19.302 --rc genhtml_legend=1 00:25:19.302 --rc geninfo_all_blocks=1 00:25:19.302 --rc geninfo_unexecuted_blocks=1 00:25:19.302 00:25:19.302 ' 00:25:19.303 10:12:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:19.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.303 --rc genhtml_branch_coverage=1 00:25:19.303 --rc genhtml_function_coverage=1 00:25:19.303 --rc genhtml_legend=1 00:25:19.303 --rc geninfo_all_blocks=1 00:25:19.303 --rc geninfo_unexecuted_blocks=1 00:25:19.303 00:25:19.303 ' 00:25:19.303 10:12:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:19.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.303 --rc genhtml_branch_coverage=1 00:25:19.303 --rc genhtml_function_coverage=1 00:25:19.303 --rc genhtml_legend=1 00:25:19.303 --rc geninfo_all_blocks=1 00:25:19.303 --rc geninfo_unexecuted_blocks=1 00:25:19.303 00:25:19.303 ' 00:25:19.303 10:12:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:19.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.303 --rc genhtml_branch_coverage=1 00:25:19.303 --rc genhtml_function_coverage=1 00:25:19.303 --rc genhtml_legend=1 00:25:19.303 --rc geninfo_all_blocks=1 00:25:19.303 --rc geninfo_unexecuted_blocks=1 00:25:19.303 00:25:19.303 ' 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:19.303 10:12:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:19.303 10:12:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.303 10:12:17 -- nvmf/common.sh@7 -- # uname -s 00:25:19.303 10:12:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.303 10:12:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.303 10:12:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.303 10:12:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.303 10:12:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.303 10:12:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.303 10:12:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:19.303 10:12:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.303 10:12:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.303 10:12:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.303 10:12:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:25:19.303 10:12:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:25:19.303 10:12:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.303 10:12:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.303 10:12:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.303 10:12:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.303 10:12:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.303 10:12:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.303 10:12:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.303 10:12:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 10:12:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 10:12:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 10:12:17 -- paths/export.sh@5 -- # export PATH 00:25:19.303 10:12:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 10:12:17 -- nvmf/common.sh@46 -- # : 0 00:25:19.303 10:12:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:19.303 10:12:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:19.303 10:12:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:19.303 10:12:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.303 10:12:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.303 10:12:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:19.303 10:12:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:19.303 10:12:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:19.303 10:12:17 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:19.303 10:12:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.303 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 10:12:17 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:19.303 10:12:17 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101172 00:25:19.303 10:12:17 -- spdkcli/common.sh@34 -- # waitforlisten 101172 00:25:19.303 10:12:17 -- common/autotest_common.sh@829 -- # '[' -z 101172 ']' 00:25:19.303 10:12:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.303 10:12:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.303 10:12:17 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:19.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.303 10:12:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.303 10:12:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.303 10:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 [2024-12-16 10:12:17.897245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:19.303 [2024-12-16 10:12:17.897383] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101172 ] 00:25:19.562 [2024-12-16 10:12:18.034349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:19.562 [2024-12-16 10:12:18.104624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:19.562 [2024-12-16 10:12:18.108414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.562 [2024-12-16 10:12:18.108431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.500 10:12:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:20.500 10:12:18 -- common/autotest_common.sh@862 -- # return 0 00:25:20.500 10:12:18 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:20.500 10:12:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.500 10:12:18 -- common/autotest_common.sh@10 -- # set +x 00:25:20.500 10:12:18 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:20.500 10:12:18 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:20.500 10:12:18 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:20.500 10:12:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.500 10:12:18 -- common/autotest_common.sh@10 -- # set +x 00:25:20.500 10:12:18 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:20.500 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:20.500 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:20.500 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:20.500 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:20.500 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:20.500 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:20.500 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:20.500 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:20.500 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:20.500 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:20.500 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:20.500 ' 00:25:21.068 [2024-12-16 10:12:19.435229] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:23.599 [2024-12-16 10:12:21.670473] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.534 [2024-12-16 10:12:22.959511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:27.074 [2024-12-16 10:12:25.349051] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:28.977 [2024-12-16 10:12:27.411430] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:30.882 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:30.882 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:30.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:30.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:30.882 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:30.882 10:12:29 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:30.882 10:12:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.882 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:30.882 10:12:29 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:30.882 10:12:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.882 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:30.882 10:12:29 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:30.882 10:12:29 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:31.141 10:12:29 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:31.141 10:12:29 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:31.141 10:12:29 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:31.141 10:12:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.141 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:31.141 10:12:29 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:31.141 10:12:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.141 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:31.141 10:12:29 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:31.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:31.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:31.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:31.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:31.141 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:31.141 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:31.141 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:31.141 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:31.141 ' 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:36.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:36.412 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:36.412 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:36.412 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:36.670 10:12:35 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:36.670 10:12:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.670 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:36.670 10:12:35 -- spdkcli/nvmf.sh@90 -- # killprocess 101172 00:25:36.670 10:12:35 -- common/autotest_common.sh@936 -- # '[' -z 101172 ']' 00:25:36.670 10:12:35 -- common/autotest_common.sh@940 -- # kill -0 101172 00:25:36.670 10:12:35 -- common/autotest_common.sh@941 -- # uname 00:25:36.670 10:12:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.670 10:12:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101172 00:25:36.670 10:12:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:36.670 killing process with pid 101172 00:25:36.670 10:12:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:36.670 10:12:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101172' 00:25:36.670 10:12:35 -- common/autotest_common.sh@955 -- # kill 101172 00:25:36.670 [2024-12-16 10:12:35.198320] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:36.670 10:12:35 -- common/autotest_common.sh@960 -- # wait 101172 00:25:36.928 10:12:35 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:36.928 10:12:35 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:36.928 10:12:35 -- spdkcli/common.sh@13 -- # '[' -n 101172 ']' 00:25:36.928 10:12:35 -- spdkcli/common.sh@14 -- # killprocess 101172 00:25:36.928 10:12:35 -- common/autotest_common.sh@936 -- # '[' -z 101172 ']' 00:25:36.928 10:12:35 -- common/autotest_common.sh@940 -- # kill -0 101172 00:25:36.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101172) - No such process 00:25:36.928 Process with pid 101172 is not found 00:25:36.928 10:12:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101172 is not found' 00:25:36.928 10:12:35 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:36.928 10:12:35 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:36.928 10:12:35 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:36.928 00:25:36.928 real 0m17.856s 00:25:36.928 user 0m38.642s 00:25:36.928 sys 0m0.840s 00:25:36.928 10:12:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:36.928 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:36.928 
************************************ 00:25:36.928 END TEST spdkcli_nvmf_tcp 00:25:36.928 ************************************ 00:25:36.928 10:12:35 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:36.928 10:12:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:36.928 10:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.928 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:36.928 ************************************ 00:25:36.928 START TEST nvmf_identify_passthru 00:25:36.928 ************************************ 00:25:36.928 10:12:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:37.188 * Looking for test storage... 00:25:37.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:37.188 10:12:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:37.188 10:12:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:37.188 10:12:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:37.188 10:12:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:37.188 10:12:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:37.188 10:12:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:37.188 10:12:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:37.188 10:12:35 -- scripts/common.sh@335 -- # IFS=.-: 00:25:37.188 10:12:35 -- scripts/common.sh@335 -- # read -ra ver1 00:25:37.188 10:12:35 -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.188 10:12:35 -- scripts/common.sh@336 -- # read -ra ver2 00:25:37.188 10:12:35 -- scripts/common.sh@337 -- # local 'op=<' 00:25:37.188 10:12:35 -- scripts/common.sh@339 -- # ver1_l=2 00:25:37.188 10:12:35 -- scripts/common.sh@340 -- # ver2_l=1 00:25:37.188 10:12:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:37.188 10:12:35 -- scripts/common.sh@343 -- # case "$op" in 00:25:37.188 10:12:35 -- scripts/common.sh@344 -- # : 1 00:25:37.188 10:12:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:37.188 10:12:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.188 10:12:35 -- scripts/common.sh@364 -- # decimal 1 00:25:37.188 10:12:35 -- scripts/common.sh@352 -- # local d=1 00:25:37.188 10:12:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.188 10:12:35 -- scripts/common.sh@354 -- # echo 1 00:25:37.188 10:12:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:37.188 10:12:35 -- scripts/common.sh@365 -- # decimal 2 00:25:37.188 10:12:35 -- scripts/common.sh@352 -- # local d=2 00:25:37.188 10:12:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.188 10:12:35 -- scripts/common.sh@354 -- # echo 2 00:25:37.188 10:12:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:37.188 10:12:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:37.188 10:12:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:37.188 10:12:35 -- scripts/common.sh@367 -- # return 0 00:25:37.188 10:12:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.188 10:12:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:37.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.188 --rc genhtml_branch_coverage=1 00:25:37.188 --rc genhtml_function_coverage=1 00:25:37.188 --rc genhtml_legend=1 00:25:37.188 --rc geninfo_all_blocks=1 00:25:37.188 --rc geninfo_unexecuted_blocks=1 00:25:37.188 00:25:37.188 ' 00:25:37.188 10:12:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:37.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.188 --rc genhtml_branch_coverage=1 00:25:37.188 --rc genhtml_function_coverage=1 00:25:37.188 --rc genhtml_legend=1 00:25:37.188 --rc geninfo_all_blocks=1 00:25:37.188 --rc geninfo_unexecuted_blocks=1 00:25:37.188 00:25:37.188 ' 00:25:37.188 10:12:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:37.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.188 --rc genhtml_branch_coverage=1 00:25:37.188 --rc genhtml_function_coverage=1 00:25:37.188 --rc genhtml_legend=1 00:25:37.188 --rc geninfo_all_blocks=1 00:25:37.188 --rc geninfo_unexecuted_blocks=1 00:25:37.188 00:25:37.188 ' 00:25:37.188 10:12:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:37.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.188 --rc genhtml_branch_coverage=1 00:25:37.188 --rc genhtml_function_coverage=1 00:25:37.188 --rc genhtml_legend=1 00:25:37.188 --rc geninfo_all_blocks=1 00:25:37.188 --rc geninfo_unexecuted_blocks=1 00:25:37.188 00:25:37.188 ' 00:25:37.188 10:12:35 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:37.188 10:12:35 -- nvmf/common.sh@7 -- # uname -s 00:25:37.188 10:12:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.188 10:12:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.188 10:12:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.188 10:12:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.188 10:12:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.188 10:12:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.188 10:12:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.188 10:12:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.188 10:12:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.188 10:12:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.188 10:12:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 
00:25:37.188 10:12:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:25:37.188 10:12:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.188 10:12:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.188 10:12:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:37.188 10:12:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.188 10:12:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.188 10:12:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.188 10:12:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.188 10:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.188 10:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.188 10:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.188 10:12:35 -- paths/export.sh@5 -- # export PATH 00:25:37.188 10:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.188 10:12:35 -- nvmf/common.sh@46 -- # : 0 00:25:37.188 10:12:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:37.188 10:12:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:37.188 10:12:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:37.188 10:12:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.188 10:12:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.188 10:12:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:37.188 10:12:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:37.188 10:12:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:37.188 10:12:35 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:37.188 10:12:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.188 10:12:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.188 10:12:35 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.188 10:12:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.188 10:12:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.189 10:12:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.189 10:12:35 -- paths/export.sh@5 -- # export PATH 00:25:37.189 10:12:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.189 10:12:35 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:37.189 10:12:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:37.189 10:12:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.189 10:12:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:37.189 10:12:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:37.189 10:12:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:37.189 10:12:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.189 10:12:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:37.189 10:12:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.189 10:12:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:37.189 10:12:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:37.189 10:12:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:37.189 10:12:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:37.189 10:12:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:37.189 10:12:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:37.189 10:12:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.189 10:12:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.189 10:12:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:37.189 10:12:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:37.189 10:12:35 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:37.189 10:12:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:37.189 10:12:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:37.189 10:12:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.189 10:12:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:37.189 10:12:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:37.189 10:12:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:37.189 10:12:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:37.189 10:12:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:37.189 10:12:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:37.189 Cannot find device "nvmf_tgt_br" 00:25:37.189 10:12:35 -- nvmf/common.sh@154 -- # true 00:25:37.189 10:12:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:37.189 Cannot find device "nvmf_tgt_br2" 00:25:37.189 10:12:35 -- nvmf/common.sh@155 -- # true 00:25:37.189 10:12:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:37.189 10:12:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:37.189 Cannot find device "nvmf_tgt_br" 00:25:37.189 10:12:35 -- nvmf/common.sh@157 -- # true 00:25:37.189 10:12:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:37.447 Cannot find device "nvmf_tgt_br2" 00:25:37.447 10:12:35 -- nvmf/common.sh@158 -- # true 00:25:37.447 10:12:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:37.447 10:12:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:37.447 10:12:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:37.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.447 10:12:35 -- nvmf/common.sh@161 -- # true 00:25:37.448 10:12:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:37.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.448 10:12:35 -- nvmf/common.sh@162 -- # true 00:25:37.448 10:12:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:37.448 10:12:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:37.448 10:12:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:37.448 10:12:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:37.448 10:12:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:37.448 10:12:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:37.448 10:12:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:37.448 10:12:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:37.448 10:12:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:37.448 10:12:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:37.448 10:12:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:37.448 10:12:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:37.448 10:12:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:37.448 10:12:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:37.448 10:12:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:37.448 10:12:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:37.448 10:12:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:37.448 10:12:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:37.448 10:12:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:37.448 10:12:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:37.448 10:12:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:37.706 10:12:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:37.706 10:12:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:37.706 10:12:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:37.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:37.706 00:25:37.706 --- 10.0.0.2 ping statistics --- 00:25:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.706 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:37.706 10:12:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:37.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:37.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:25:37.706 00:25:37.706 --- 10.0.0.3 ping statistics --- 00:25:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.706 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:37.706 10:12:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:37.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:25:37.706 00:25:37.706 --- 10.0.0.1 ping statistics --- 00:25:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.706 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:37.707 10:12:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.707 10:12:36 -- nvmf/common.sh@421 -- # return 0 00:25:37.707 10:12:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:37.707 10:12:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.707 10:12:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:37.707 10:12:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:37.707 10:12:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.707 10:12:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:37.707 10:12:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:37.707 10:12:36 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:37.707 10:12:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.707 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:37.707 10:12:36 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:37.707 10:12:36 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:37.707 10:12:36 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:37.707 10:12:36 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:37.707 10:12:36 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:37.707 10:12:36 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:37.707 10:12:36 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:37.707 10:12:36 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:37.707 10:12:36 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:37.707 10:12:36 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:37.707 10:12:36 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:37.707 10:12:36 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:37.707 10:12:36 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:37.707 10:12:36 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:37.707 10:12:36 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:37.707 10:12:36 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:37.707 10:12:36 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:37.707 10:12:36 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:37.965 10:12:36 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:37.965 10:12:36 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:37.965 10:12:36 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:37.965 10:12:36 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:37.965 10:12:36 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:37.965 10:12:36 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:37.965 10:12:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.965 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:37.965 10:12:36 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:37.965 10:12:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.965 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:38.224 10:12:36 -- target/identify_passthru.sh@31 -- # nvmfpid=101678 00:25:38.224 10:12:36 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:38.224 10:12:36 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.224 10:12:36 -- target/identify_passthru.sh@35 -- # waitforlisten 101678 00:25:38.224 10:12:36 -- common/autotest_common.sh@829 -- # '[' -z 101678 ']' 00:25:38.224 10:12:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.224 10:12:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.224 10:12:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.224 10:12:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.224 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:38.224 [2024-12-16 10:12:36.651947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:38.224 [2024-12-16 10:12:36.652034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.224 [2024-12-16 10:12:36.791616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.482 [2024-12-16 10:12:36.874435] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.482 [2024-12-16 10:12:36.874595] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.482 [2024-12-16 10:12:36.874606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.482 [2024-12-16 10:12:36.874614] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
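Before the target comes up, identify_passthru.sh records the first local PCIe controller's address, serial and model so it can later verify that the same values come back through the NVMe/TCP passthru subsystem. A standalone sketch of that extraction, assuming the same spdk_repo checkout path as this run:

rootdir=/home/vagrant/spdk_repo/spdk                        # assumed checkout path, matching this run

# First controller reported by gen_nvme.sh (here: 0000:00:06.0, a QEMU NVMe device).
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

identify_local() {                                          # small wrapper, not part of the test scripts
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
}

nvme_serial_number=$(identify_local | awk '/Serial Number:/ {print $3}')    # 12340 in this run
nvme_model_number=$(identify_local | awk '/Model Number:/ {print $3}')      # QEMU in this run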
00:25:38.482 [2024-12-16 10:12:36.874754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.482 [2024-12-16 10:12:36.874896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.482 [2024-12-16 10:12:36.875330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.482 [2024-12-16 10:12:36.875364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.418 10:12:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.418 10:12:37 -- common/autotest_common.sh@862 -- # return 0 00:25:39.418 10:12:37 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:39.418 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.418 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 [2024-12-16 10:12:37.843961] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 [2024-12-16 10:12:37.854274] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:39.419 10:12:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 10:12:37 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 Nvme0n1 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 10:12:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:37 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.419 10:12:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 [2024-12-16 10:12:38.002755] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.419 10:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:39.419 10:12:38 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:39.419 10:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.419 10:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 [2024-12-16 10:12:38.010428] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:39.419 [ 00:25:39.419 { 00:25:39.419 "allow_any_host": true, 00:25:39.419 "hosts": [], 00:25:39.419 "listen_addresses": [], 00:25:39.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:39.419 "subtype": "Discovery" 00:25:39.419 }, 00:25:39.419 { 00:25:39.419 "allow_any_host": true, 00:25:39.419 "hosts": [], 00:25:39.419 "listen_addresses": [ 00:25:39.419 { 00:25:39.419 "adrfam": "IPv4", 00:25:39.419 "traddr": "10.0.0.2", 00:25:39.419 "transport": "TCP", 00:25:39.419 "trsvcid": "4420", 00:25:39.419 "trtype": "TCP" 00:25:39.419 } 00:25:39.419 ], 00:25:39.419 "max_cntlid": 65519, 00:25:39.419 "max_namespaces": 1, 00:25:39.419 "min_cntlid": 1, 00:25:39.419 "model_number": "SPDK bdev Controller", 00:25:39.419 "namespaces": [ 00:25:39.419 { 00:25:39.419 "bdev_name": "Nvme0n1", 00:25:39.419 "name": "Nvme0n1", 00:25:39.419 "nguid": "C2A4704F3AE74923ADEC86ACEACB3DC7", 00:25:39.419 "nsid": 1, 00:25:39.419 "uuid": "c2a4704f-3ae7-4923-adec-86aceacb3dc7" 00:25:39.419 } 00:25:39.419 ], 00:25:39.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.419 "serial_number": "SPDK00000000000001", 00:25:39.419 "subtype": "NVMe" 00:25:39.419 } 00:25:39.419 ] 00:25:39.419 10:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.419 10:12:38 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:39.419 10:12:38 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:39.419 10:12:38 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:39.678 10:12:38 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:39.678 10:12:38 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:39.678 10:12:38 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:39.678 10:12:38 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:39.938 10:12:38 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:39.938 10:12:38 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:39.938 10:12:38 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:39.938 10:12:38 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:39.938 10:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.938 10:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:39.938 10:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.938 10:12:38 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:39.938 10:12:38 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:39.938 10:12:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:39.938 10:12:38 -- nvmf/common.sh@116 -- # sync 00:25:39.938 10:12:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:39.938 10:12:38 -- nvmf/common.sh@119 -- # set +e 00:25:39.938 10:12:38 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:39.938 10:12:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:39.938 rmmod nvme_tcp 00:25:39.938 rmmod nvme_fabrics 00:25:40.197 rmmod nvme_keyring 00:25:40.197 10:12:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:40.197 10:12:38 -- nvmf/common.sh@123 -- # set -e 00:25:40.197 10:12:38 -- nvmf/common.sh@124 -- # return 0 00:25:40.197 10:12:38 -- nvmf/common.sh@477 -- # '[' -n 101678 ']' 00:25:40.197 10:12:38 -- nvmf/common.sh@478 -- # killprocess 101678 00:25:40.197 10:12:38 -- common/autotest_common.sh@936 -- # '[' -z 101678 ']' 00:25:40.197 10:12:38 -- common/autotest_common.sh@940 -- # kill -0 101678 00:25:40.197 10:12:38 -- common/autotest_common.sh@941 -- # uname 00:25:40.197 10:12:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:40.197 10:12:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101678 00:25:40.197 10:12:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:40.197 10:12:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:40.197 killing process with pid 101678 00:25:40.197 10:12:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101678' 00:25:40.197 10:12:38 -- common/autotest_common.sh@955 -- # kill 101678 00:25:40.197 [2024-12-16 10:12:38.631639] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:40.197 10:12:38 -- common/autotest_common.sh@960 -- # wait 101678 00:25:40.461 10:12:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:40.461 10:12:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:40.461 10:12:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:40.461 10:12:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.461 10:12:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:40.461 10:12:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.461 10:12:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.461 10:12:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.461 10:12:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:40.461 00:25:40.461 real 0m3.405s 00:25:40.461 user 0m8.512s 00:25:40.461 sys 0m0.950s 00:25:40.461 10:12:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.461 ************************************ 00:25:40.461 END TEST nvmf_identify_passthru 00:25:40.461 ************************************ 00:25:40.461 10:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:40.461 10:12:38 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:40.461 10:12:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:40.461 10:12:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.461 10:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:40.461 ************************************ 00:25:40.461 START TEST nvmf_dif 00:25:40.461 ************************************ 00:25:40.461 10:12:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:40.461 * Looking for test storage... 
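Stripped of the harness, the nvmf_identify_passthru run that just ended is a short rpc.py conversation with the --wait-for-rpc target, followed by an Identify over TCP that must echo the PCIe controller's data back. A sketch of the same calls, with the rpc.py path assumed (the test goes through its rpc_cmd wrapper and the default /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py             # assumed path

$rpc nvmf_set_config --passthru-identify-ctrlr              # answer Identify with the backing controller's data
$rpc framework_start_init                                   # leave --wait-for-rpc mode
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The actual check: Identify over the fabric must report the local controller's serial (12340), not SPDK's.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | awk '/Serial Number:/ {print $3}'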
00:25:40.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:40.461 10:12:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:40.461 10:12:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:40.461 10:12:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:40.748 10:12:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:40.748 10:12:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:40.748 10:12:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:40.748 10:12:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:40.748 10:12:39 -- scripts/common.sh@335 -- # IFS=.-: 00:25:40.748 10:12:39 -- scripts/common.sh@335 -- # read -ra ver1 00:25:40.748 10:12:39 -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.748 10:12:39 -- scripts/common.sh@336 -- # read -ra ver2 00:25:40.748 10:12:39 -- scripts/common.sh@337 -- # local 'op=<' 00:25:40.748 10:12:39 -- scripts/common.sh@339 -- # ver1_l=2 00:25:40.748 10:12:39 -- scripts/common.sh@340 -- # ver2_l=1 00:25:40.748 10:12:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:40.748 10:12:39 -- scripts/common.sh@343 -- # case "$op" in 00:25:40.748 10:12:39 -- scripts/common.sh@344 -- # : 1 00:25:40.749 10:12:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:40.749 10:12:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:40.749 10:12:39 -- scripts/common.sh@364 -- # decimal 1 00:25:40.749 10:12:39 -- scripts/common.sh@352 -- # local d=1 00:25:40.749 10:12:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.749 10:12:39 -- scripts/common.sh@354 -- # echo 1 00:25:40.749 10:12:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:40.749 10:12:39 -- scripts/common.sh@365 -- # decimal 2 00:25:40.749 10:12:39 -- scripts/common.sh@352 -- # local d=2 00:25:40.749 10:12:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.749 10:12:39 -- scripts/common.sh@354 -- # echo 2 00:25:40.749 10:12:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:40.749 10:12:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:40.749 10:12:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:40.749 10:12:39 -- scripts/common.sh@367 -- # return 0 00:25:40.749 10:12:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.749 10:12:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.749 --rc genhtml_branch_coverage=1 00:25:40.749 --rc genhtml_function_coverage=1 00:25:40.749 --rc genhtml_legend=1 00:25:40.749 --rc geninfo_all_blocks=1 00:25:40.749 --rc geninfo_unexecuted_blocks=1 00:25:40.749 00:25:40.749 ' 00:25:40.749 10:12:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.749 --rc genhtml_branch_coverage=1 00:25:40.749 --rc genhtml_function_coverage=1 00:25:40.749 --rc genhtml_legend=1 00:25:40.749 --rc geninfo_all_blocks=1 00:25:40.749 --rc geninfo_unexecuted_blocks=1 00:25:40.749 00:25:40.749 ' 00:25:40.749 10:12:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.749 --rc genhtml_branch_coverage=1 00:25:40.749 --rc genhtml_function_coverage=1 00:25:40.749 --rc genhtml_legend=1 00:25:40.749 --rc geninfo_all_blocks=1 00:25:40.749 --rc geninfo_unexecuted_blocks=1 00:25:40.749 00:25:40.749 ' 00:25:40.749 
10:12:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.749 --rc genhtml_branch_coverage=1 00:25:40.749 --rc genhtml_function_coverage=1 00:25:40.749 --rc genhtml_legend=1 00:25:40.749 --rc geninfo_all_blocks=1 00:25:40.749 --rc geninfo_unexecuted_blocks=1 00:25:40.749 00:25:40.749 ' 00:25:40.749 10:12:39 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.749 10:12:39 -- nvmf/common.sh@7 -- # uname -s 00:25:40.749 10:12:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.749 10:12:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.749 10:12:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.749 10:12:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.749 10:12:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.749 10:12:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.749 10:12:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.749 10:12:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.749 10:12:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.749 10:12:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:25:40.749 10:12:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:25:40.749 10:12:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.749 10:12:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.749 10:12:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.749 10:12:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.749 10:12:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.749 10:12:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.749 10:12:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.749 10:12:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.749 10:12:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.749 10:12:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.749 10:12:39 -- paths/export.sh@5 -- # export PATH 00:25:40.749 10:12:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.749 10:12:39 -- nvmf/common.sh@46 -- # : 0 00:25:40.749 10:12:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:40.749 10:12:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:40.749 10:12:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:40.749 10:12:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.749 10:12:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.749 10:12:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:40.749 10:12:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:40.749 10:12:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:40.749 10:12:39 -- target/dif.sh@15 -- # NULL_META=16 00:25:40.749 10:12:39 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:40.749 10:12:39 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:40.749 10:12:39 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:40.749 10:12:39 -- target/dif.sh@135 -- # nvmftestinit 00:25:40.749 10:12:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.749 10:12:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.749 10:12:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.749 10:12:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.749 10:12:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.749 10:12:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.749 10:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.749 10:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.749 10:12:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:40.749 10:12:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:40.749 10:12:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.749 10:12:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.749 10:12:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.749 10:12:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:40.749 10:12:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.749 10:12:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.749 10:12:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.749 10:12:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.749 10:12:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.749 10:12:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.749 10:12:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.749 10:12:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.749 10:12:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:40.749 10:12:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:40.749 Cannot find device "nvmf_tgt_br" 
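nvmftestinit above rebuilds the same veth/bridge rig the passthru test used: the initiator side stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace behind 10.0.0.2 and 10.0.0.3, and everything meets on the nvmf_br bridge. Condensed from the trace, the rig amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target side, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2           # target side, second address
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the initiator-side veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                  # sanity checks, as in the trace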
00:25:40.749 10:12:39 -- nvmf/common.sh@154 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.749 Cannot find device "nvmf_tgt_br2" 00:25:40.749 10:12:39 -- nvmf/common.sh@155 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:40.749 10:12:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:40.749 Cannot find device "nvmf_tgt_br" 00:25:40.749 10:12:39 -- nvmf/common.sh@157 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:40.749 Cannot find device "nvmf_tgt_br2" 00:25:40.749 10:12:39 -- nvmf/common.sh@158 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:40.749 10:12:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:40.749 10:12:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.749 10:12:39 -- nvmf/common.sh@161 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.749 10:12:39 -- nvmf/common.sh@162 -- # true 00:25:40.749 10:12:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.749 10:12:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.749 10:12:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.749 10:12:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.749 10:12:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:41.008 10:12:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:41.008 10:12:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:41.008 10:12:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:41.008 10:12:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:41.008 10:12:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:41.008 10:12:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:41.008 10:12:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:41.008 10:12:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:41.008 10:12:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:41.008 10:12:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:41.008 10:12:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:41.008 10:12:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:41.008 10:12:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:41.008 10:12:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:41.008 10:12:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:41.008 10:12:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:41.008 10:12:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:41.008 10:12:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:41.008 10:12:39 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:41.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:25:41.008 00:25:41.008 --- 10.0.0.2 ping statistics --- 00:25:41.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.009 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:41.009 10:12:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:41.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:41.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:25:41.009 00:25:41.009 --- 10.0.0.3 ping statistics --- 00:25:41.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.009 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:41.009 10:12:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:41.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:41.009 00:25:41.009 --- 10.0.0.1 ping statistics --- 00:25:41.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.009 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:41.009 10:12:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.009 10:12:39 -- nvmf/common.sh@421 -- # return 0 00:25:41.009 10:12:39 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:41.009 10:12:39 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:41.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:41.526 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.526 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:41.526 10:12:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.526 10:12:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:41.526 10:12:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:41.526 10:12:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.526 10:12:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:41.526 10:12:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:41.526 10:12:39 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:41.526 10:12:39 -- target/dif.sh@137 -- # nvmfappstart 00:25:41.526 10:12:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:41.526 10:12:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.526 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.526 10:12:39 -- nvmf/common.sh@469 -- # nvmfpid=102037 00:25:41.526 10:12:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:41.526 10:12:39 -- nvmf/common.sh@470 -- # waitforlisten 102037 00:25:41.526 10:12:39 -- common/autotest_common.sh@829 -- # '[' -z 102037 ']' 00:25:41.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.526 10:12:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.526 10:12:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.526 10:12:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
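Once the target inside the namespace finishes starting, dif.sh's setup is just the defaults from above translated into RPCs: a TCP transport with --dif-insert-or-strip and, per subsystem, a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1. As the trace that follows shows, that is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py             # assumed path; the test uses its rpc_cmd wrapper

$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip  # target inserts/strips protection info on the wire
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420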
00:25:41.526 10:12:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.526 10:12:39 -- common/autotest_common.sh@10 -- # set +x 00:25:41.526 [2024-12-16 10:12:40.059550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:41.526 [2024-12-16 10:12:40.059868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.784 [2024-12-16 10:12:40.203421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.784 [2024-12-16 10:12:40.294011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.784 [2024-12-16 10:12:40.294221] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.784 [2024-12-16 10:12:40.294241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.784 [2024-12-16 10:12:40.294254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.784 [2024-12-16 10:12:40.294300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.720 10:12:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.720 10:12:41 -- common/autotest_common.sh@862 -- # return 0 00:25:42.720 10:12:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:42.720 10:12:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 10:12:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.720 10:12:41 -- target/dif.sh@139 -- # create_transport 00:25:42.720 10:12:41 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:42.720 10:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 [2024-12-16 10:12:41.132911] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.720 10:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 10:12:41 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:42.720 10:12:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:42.720 10:12:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 ************************************ 00:25:42.720 START TEST fio_dif_1_default 00:25:42.720 ************************************ 00:25:42.720 10:12:41 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:42.720 10:12:41 -- target/dif.sh@86 -- # create_subsystems 0 00:25:42.720 10:12:41 -- target/dif.sh@28 -- # local sub 00:25:42.720 10:12:41 -- target/dif.sh@30 -- # for sub in "$@" 00:25:42.720 10:12:41 -- target/dif.sh@31 -- # create_subsystem 0 00:25:42.720 10:12:41 -- target/dif.sh@18 -- # local sub_id=0 00:25:42.720 10:12:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:42.720 10:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 bdev_null0 00:25:42.720 10:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 10:12:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:42.720 10:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 10:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 10:12:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:42.720 10:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 10:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 10:12:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.720 10:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.720 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:42.720 [2024-12-16 10:12:41.181083] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.720 10:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.720 10:12:41 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:42.720 10:12:41 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:42.720 10:12:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:42.720 10:12:41 -- nvmf/common.sh@520 -- # config=() 00:25:42.720 10:12:41 -- nvmf/common.sh@520 -- # local subsystem config 00:25:42.720 10:12:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.720 10:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.720 10:12:41 -- target/dif.sh@82 -- # gen_fio_conf 00:25:42.720 10:12:41 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.720 10:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.720 { 00:25:42.720 "params": { 00:25:42.720 "name": "Nvme$subsystem", 00:25:42.720 "trtype": "$TEST_TRANSPORT", 00:25:42.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.720 "adrfam": "ipv4", 00:25:42.720 "trsvcid": "$NVMF_PORT", 00:25:42.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.720 "hdgst": ${hdgst:-false}, 00:25:42.720 "ddgst": ${ddgst:-false} 00:25:42.720 }, 00:25:42.720 "method": "bdev_nvme_attach_controller" 00:25:42.720 } 00:25:42.720 EOF 00:25:42.720 )") 00:25:42.720 10:12:41 -- target/dif.sh@54 -- # local file 00:25:42.720 10:12:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:42.720 10:12:41 -- target/dif.sh@56 -- # cat 00:25:42.721 10:12:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.721 10:12:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:42.721 10:12:41 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.721 10:12:41 -- common/autotest_common.sh@1330 -- # shift 00:25:42.721 10:12:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:42.721 10:12:41 -- nvmf/common.sh@542 -- # cat 00:25:42.721 10:12:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.721 10:12:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:42.721 10:12:41 -- target/dif.sh@72 -- # (( file <= files )) 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.721 10:12:41 -- nvmf/common.sh@544 -- # jq . 00:25:42.721 10:12:41 -- nvmf/common.sh@545 -- # IFS=, 00:25:42.721 10:12:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:42.721 "params": { 00:25:42.721 "name": "Nvme0", 00:25:42.721 "trtype": "tcp", 00:25:42.721 "traddr": "10.0.0.2", 00:25:42.721 "adrfam": "ipv4", 00:25:42.721 "trsvcid": "4420", 00:25:42.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:42.721 "hdgst": false, 00:25:42.721 "ddgst": false 00:25:42.721 }, 00:25:42.721 "method": "bdev_nvme_attach_controller" 00:25:42.721 }' 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.721 10:12:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.721 10:12:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.721 10:12:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.721 10:12:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.721 10:12:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:42.721 10:12:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.979 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:42.979 fio-3.35 00:25:42.979 Starting 1 thread 00:25:43.238 [2024-12-16 10:12:41.826071] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
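The fio job above talks to that subsystem through the spdk_bdev fio plugin: the JSON fed in on /dev/fd/62 attaches an NVMe/TCP controller named Nvme0, and the job file on /dev/fd/61 then targets the resulting Nvme0n1 bdev. Roughly, with on-disk files instead of process substitution, and with the job options not visible in the output line (thread, time_based/runtime) assumed:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev     # plugin path as used by this run

cat > /tmp/bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
             "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false}}]}]}
EOF

cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
bs=4k
iodepth=4
rw=randread
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio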
00:25:43.238 [2024-12-16 10:12:41.826161] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:55.443 00:25:55.443 filename0: (groupid=0, jobs=1): err= 0: pid=102123: Mon Dec 16 10:12:51 2024 00:25:55.443 read: IOPS=5224, BW=20.4MiB/s (21.4MB/s)(204MiB/10001msec) 00:25:55.443 slat (nsec): min=5735, max=70681, avg=6961.57, stdev=2504.52 00:25:55.443 clat (usec): min=348, max=42438, avg=744.71, stdev=3792.88 00:25:55.443 lat (usec): min=354, max=42446, avg=751.67, stdev=3792.94 00:25:55.443 clat percentiles (usec): 00:25:55.443 | 1.00th=[ 355], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 367], 00:25:55.443 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 388], 00:25:55.443 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 461], 00:25:55.443 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:25:55.443 | 99.99th=[42206] 00:25:55.443 bw ( KiB/s): min= 8832, max=29888, per=100.00%, avg=20939.79, stdev=5943.30, samples=19 00:25:55.443 iops : min= 2208, max= 7472, avg=5234.95, stdev=1485.82, samples=19 00:25:55.443 lat (usec) : 500=97.98%, 750=1.12% 00:25:55.443 lat (msec) : 2=0.02%, 10=0.01%, 50=0.87% 00:25:55.443 cpu : usr=87.46%, sys=10.35%, ctx=16, majf=0, minf=0 00:25:55.443 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:55.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:55.443 issued rwts: total=52248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:55.443 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:55.443 00:25:55.443 Run status group 0 (all jobs): 00:25:55.443 READ: bw=20.4MiB/s (21.4MB/s), 20.4MiB/s-20.4MiB/s (21.4MB/s-21.4MB/s), io=204MiB (214MB), run=10001-10001msec 00:25:55.443 10:12:52 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:55.444 10:12:52 -- target/dif.sh@43 -- # local sub 00:25:55.444 10:12:52 -- target/dif.sh@45 -- # for sub in "$@" 00:25:55.444 10:12:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:55.444 10:12:52 -- target/dif.sh@36 -- # local sub_id=0 00:25:55.444 10:12:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 ************************************ 00:25:55.444 END TEST fio_dif_1_default 00:25:55.444 ************************************ 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 00:25:55.444 real 0m11.015s 00:25:55.444 user 0m9.400s 00:25:55.444 sys 0m1.304s 00:25:55.444 10:12:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:55.444 10:12:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:55.444 10:12:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 ************************************ 00:25:55.444 START 
TEST fio_dif_1_multi_subsystems 00:25:55.444 ************************************ 00:25:55.444 10:12:52 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:55.444 10:12:52 -- target/dif.sh@92 -- # local files=1 00:25:55.444 10:12:52 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:55.444 10:12:52 -- target/dif.sh@28 -- # local sub 00:25:55.444 10:12:52 -- target/dif.sh@30 -- # for sub in "$@" 00:25:55.444 10:12:52 -- target/dif.sh@31 -- # create_subsystem 0 00:25:55.444 10:12:52 -- target/dif.sh@18 -- # local sub_id=0 00:25:55.444 10:12:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 bdev_null0 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 [2024-12-16 10:12:52.247456] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@30 -- # for sub in "$@" 00:25:55.444 10:12:52 -- target/dif.sh@31 -- # create_subsystem 1 00:25:55.444 10:12:52 -- target/dif.sh@18 -- # local sub_id=1 00:25:55.444 10:12:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 bdev_null1 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.444 10:12:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.444 10:12:52 -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.444 10:12:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.444 10:12:52 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:55.444 10:12:52 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:55.444 10:12:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:55.444 10:12:52 -- nvmf/common.sh@520 -- # config=() 00:25:55.444 10:12:52 -- nvmf/common.sh@520 -- # local subsystem config 00:25:55.444 10:12:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.444 10:12:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:55.444 10:12:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.444 10:12:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:55.444 { 00:25:55.444 "params": { 00:25:55.444 "name": "Nvme$subsystem", 00:25:55.444 "trtype": "$TEST_TRANSPORT", 00:25:55.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.444 "adrfam": "ipv4", 00:25:55.444 "trsvcid": "$NVMF_PORT", 00:25:55.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.444 "hdgst": ${hdgst:-false}, 00:25:55.444 "ddgst": ${ddgst:-false} 00:25:55.444 }, 00:25:55.444 "method": "bdev_nvme_attach_controller" 00:25:55.444 } 00:25:55.444 EOF 00:25:55.444 )") 00:25:55.444 10:12:52 -- target/dif.sh@82 -- # gen_fio_conf 00:25:55.444 10:12:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:55.444 10:12:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:55.444 10:12:52 -- target/dif.sh@54 -- # local file 00:25:55.444 10:12:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:55.444 10:12:52 -- target/dif.sh@56 -- # cat 00:25:55.444 10:12:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.444 10:12:52 -- common/autotest_common.sh@1330 -- # shift 00:25:55.444 10:12:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:55.444 10:12:52 -- nvmf/common.sh@542 -- # cat 00:25:55.444 10:12:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.444 10:12:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:55.444 10:12:52 -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.444 10:12:52 -- target/dif.sh@73 -- # cat 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:55.444 10:12:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:55.444 10:12:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:55.444 { 00:25:55.444 "params": { 00:25:55.444 "name": "Nvme$subsystem", 00:25:55.444 "trtype": "$TEST_TRANSPORT", 00:25:55.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.444 "adrfam": "ipv4", 00:25:55.444 "trsvcid": "$NVMF_PORT", 00:25:55.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.444 "hdgst": ${hdgst:-false}, 00:25:55.444 "ddgst": ${ddgst:-false} 00:25:55.444 }, 00:25:55.444 "method": "bdev_nvme_attach_controller" 00:25:55.444 } 00:25:55.444 EOF 00:25:55.444 )") 00:25:55.444 10:12:52 -- nvmf/common.sh@542 -- # cat 00:25:55.444 10:12:52 -- target/dif.sh@72 
-- # (( file++ )) 00:25:55.444 10:12:52 -- target/dif.sh@72 -- # (( file <= files )) 00:25:55.444 10:12:52 -- nvmf/common.sh@544 -- # jq . 00:25:55.444 10:12:52 -- nvmf/common.sh@545 -- # IFS=, 00:25:55.444 10:12:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:55.444 "params": { 00:25:55.444 "name": "Nvme0", 00:25:55.444 "trtype": "tcp", 00:25:55.444 "traddr": "10.0.0.2", 00:25:55.444 "adrfam": "ipv4", 00:25:55.444 "trsvcid": "4420", 00:25:55.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:55.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:55.444 "hdgst": false, 00:25:55.444 "ddgst": false 00:25:55.444 }, 00:25:55.444 "method": "bdev_nvme_attach_controller" 00:25:55.444 },{ 00:25:55.444 "params": { 00:25:55.444 "name": "Nvme1", 00:25:55.444 "trtype": "tcp", 00:25:55.444 "traddr": "10.0.0.2", 00:25:55.444 "adrfam": "ipv4", 00:25:55.444 "trsvcid": "4420", 00:25:55.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.444 "hdgst": false, 00:25:55.444 "ddgst": false 00:25:55.444 }, 00:25:55.444 "method": "bdev_nvme_attach_controller" 00:25:55.444 }' 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:55.444 10:12:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:55.444 10:12:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:55.444 10:12:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:55.445 10:12:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:55.445 10:12:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:55.445 10:12:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:55.445 10:12:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:55.445 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:55.445 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:55.445 fio-3.35 00:25:55.445 Starting 2 threads 00:25:55.445 [2024-12-16 10:12:53.041027] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
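Editor's note: before the two-thread run above, the trace creates two DIF-type-1 null bdevs and exposes each through its own NVMe-oF/TCP subsystem via rpc_cmd. A minimal sketch of that target-side sequence, written out with scripts/rpc.py, is below; the RPC names and arguments are the ones visible in the trace, while the rpc.py path and the loop are assumptions, and the tcp transport is assumed to have been created earlier in the run.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  for i in 0 1; do
      # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
      $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
      # One subsystem per bdev, open to any host
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      # Listen on the same TCP address/port for both subsystems
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done

The initiator side then attaches Nvme0 to cnode0 and Nvme1 to cnode1 through the two-entry JSON config printed just above, which is why the fio run reports separate filename0 and filename1 jobs.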
00:25:55.445 [2024-12-16 10:12:53.041104] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.417 00:26:05.417 filename0: (groupid=0, jobs=1): err= 0: pid=102284: Mon Dec 16 10:13:03 2024 00:26:05.417 read: IOPS=196, BW=788KiB/s (807kB/s)(7904KiB/10035msec) 00:26:05.417 slat (nsec): min=6017, max=59494, avg=8534.10, stdev=4239.80 00:26:05.417 clat (usec): min=353, max=41991, avg=20286.98, stdev=20214.63 00:26:05.417 lat (usec): min=359, max=42005, avg=20295.52, stdev=20214.74 00:26:05.417 clat percentiles (usec): 00:26:05.417 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 416], 00:26:05.417 | 30.00th=[ 445], 40.00th=[ 529], 50.00th=[ 1004], 60.00th=[40633], 00:26:05.417 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:05.417 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:05.417 | 99.99th=[42206] 00:26:05.417 bw ( KiB/s): min= 448, max= 1440, per=31.52%, avg=788.75, stdev=268.26, samples=20 00:26:05.417 iops : min= 112, max= 360, avg=197.15, stdev=67.06, samples=20 00:26:05.417 lat (usec) : 500=36.59%, 750=11.59%, 1000=1.72% 00:26:05.417 lat (msec) : 2=1.11%, 50=48.99% 00:26:05.417 cpu : usr=95.92%, sys=3.56%, ctx=12, majf=0, minf=7 00:26:05.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.417 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.417 filename1: (groupid=0, jobs=1): err= 0: pid=102285: Mon Dec 16 10:13:03 2024 00:26:05.417 read: IOPS=429, BW=1718KiB/s (1759kB/s)(16.8MiB/10003msec) 00:26:05.417 slat (nsec): min=5824, max=37625, avg=7569.65, stdev=2837.97 00:26:05.417 clat (usec): min=352, max=42455, avg=9290.32, stdev=16688.27 00:26:05.417 lat (usec): min=358, max=42465, avg=9297.89, stdev=16688.39 00:26:05.417 clat percentiles (usec): 00:26:05.417 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 400], 00:26:05.417 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 523], 60.00th=[ 553], 00:26:05.417 | 70.00th=[ 578], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:26:05.417 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:05.417 | 99.99th=[42206] 00:26:05.417 bw ( KiB/s): min= 544, max= 7392, per=66.80%, avg=1670.63, stdev=1640.60, samples=19 00:26:05.417 iops : min= 136, max= 1848, avg=417.63, stdev=410.17, samples=19 00:26:05.417 lat (usec) : 500=47.42%, 750=28.19%, 1000=1.86% 00:26:05.417 lat (msec) : 2=0.74%, 50=21.79% 00:26:05.417 cpu : usr=95.68%, sys=3.80%, ctx=18, majf=0, minf=0 00:26:05.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.417 issued rwts: total=4296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.417 00:26:05.417 Run status group 0 (all jobs): 00:26:05.417 READ: bw=2500KiB/s (2560kB/s), 788KiB/s-1718KiB/s (807kB/s-1759kB/s), io=24.5MiB (25.7MB), run=10003-10035msec 00:26:05.417 10:13:03 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:05.417 10:13:03 -- target/dif.sh@43 -- # local sub 00:26:05.417 10:13:03 -- target/dif.sh@45 -- # for sub 
in "$@" 00:26:05.417 10:13:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.417 10:13:03 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.417 10:13:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.417 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.417 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.417 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.417 10:13:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.417 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.418 10:13:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:05.418 10:13:03 -- target/dif.sh@36 -- # local sub_id=1 00:26:05.418 10:13:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 ************************************ 00:26:05.418 END TEST fio_dif_1_multi_subsystems 00:26:05.418 ************************************ 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 00:26:05.418 real 0m11.279s 00:26:05.418 user 0m20.042s 00:26:05.418 sys 0m1.050s 00:26:05.418 10:13:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 10:13:03 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:05.418 10:13:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:05.418 10:13:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 ************************************ 00:26:05.418 START TEST fio_dif_rand_params 00:26:05.418 ************************************ 00:26:05.418 10:13:03 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:05.418 10:13:03 -- target/dif.sh@100 -- # local NULL_DIF 00:26:05.418 10:13:03 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:05.418 10:13:03 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:05.418 10:13:03 -- target/dif.sh@103 -- # bs=128k 00:26:05.418 10:13:03 -- target/dif.sh@103 -- # numjobs=3 00:26:05.418 10:13:03 -- target/dif.sh@103 -- # iodepth=3 00:26:05.418 10:13:03 -- target/dif.sh@103 -- # runtime=5 00:26:05.418 10:13:03 -- target/dif.sh@105 -- # create_subsystems 0 00:26:05.418 10:13:03 -- target/dif.sh@28 -- # local sub 00:26:05.418 10:13:03 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.418 10:13:03 -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.418 10:13:03 -- target/dif.sh@18 -- # local sub_id=0 00:26:05.418 10:13:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 bdev_null0 00:26:05.418 10:13:03 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.418 10:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.418 10:13:03 -- common/autotest_common.sh@10 -- # set +x 00:26:05.418 [2024-12-16 10:13:03.588306] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.418 10:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.418 10:13:03 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:05.418 10:13:03 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:05.418 10:13:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:05.418 10:13:03 -- nvmf/common.sh@520 -- # config=() 00:26:05.418 10:13:03 -- nvmf/common.sh@520 -- # local subsystem config 00:26:05.418 10:13:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.418 10:13:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.418 { 00:26:05.418 "params": { 00:26:05.418 "name": "Nvme$subsystem", 00:26:05.418 "trtype": "$TEST_TRANSPORT", 00:26:05.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.418 "adrfam": "ipv4", 00:26:05.418 "trsvcid": "$NVMF_PORT", 00:26:05.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.418 "hdgst": ${hdgst:-false}, 00:26:05.418 "ddgst": ${ddgst:-false} 00:26:05.418 }, 00:26:05.418 "method": "bdev_nvme_attach_controller" 00:26:05.418 } 00:26:05.418 EOF 00:26:05.418 )") 00:26:05.418 10:13:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.418 10:13:03 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.418 10:13:03 -- target/dif.sh@82 -- # gen_fio_conf 00:26:05.418 10:13:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:05.418 10:13:03 -- target/dif.sh@54 -- # local file 00:26:05.418 10:13:03 -- target/dif.sh@56 -- # cat 00:26:05.418 10:13:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:05.418 10:13:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:05.418 10:13:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.418 10:13:03 -- nvmf/common.sh@542 -- # cat 00:26:05.418 10:13:03 -- common/autotest_common.sh@1330 -- # shift 00:26:05.418 10:13:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:05.418 10:13:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.418 10:13:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:05.418 10:13:03 -- target/dif.sh@72 -- # (( file <= files )) 
00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:05.418 10:13:03 -- nvmf/common.sh@544 -- # jq . 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.418 10:13:03 -- nvmf/common.sh@545 -- # IFS=, 00:26:05.418 10:13:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:05.418 "params": { 00:26:05.418 "name": "Nvme0", 00:26:05.418 "trtype": "tcp", 00:26:05.418 "traddr": "10.0.0.2", 00:26:05.418 "adrfam": "ipv4", 00:26:05.418 "trsvcid": "4420", 00:26:05.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.418 "hdgst": false, 00:26:05.418 "ddgst": false 00:26:05.418 }, 00:26:05.418 "method": "bdev_nvme_attach_controller" 00:26:05.418 }' 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.418 10:13:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.418 10:13:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.418 10:13:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.418 10:13:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.418 10:13:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:05.418 10:13:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.418 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:05.418 ... 00:26:05.418 fio-3.35 00:26:05.418 Starting 3 threads 00:26:05.676 [2024-12-16 10:13:04.229004] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
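Editor's note: this pass of fio_dif_rand_params sets NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5, which matches the 128 KiB randread, iodepth-3, three-thread run that starts here and the ~5-second runtimes reported below. A minimal sketch of the corresponding job shape follows; the key names are standard fio syntax, but filename=Nvme0n1 and the temp path are assumptions rather than the harness's literal gen_fio_conf output.

  cat > /tmp/rand_params.fio <<'EOF'
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1
  EOF

With numjobs=3 and no group_reporting, fio prints one result block per job, which is why three separate filename0 entries (distinct pids) appear in the output that follows.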
00:26:05.676 [2024-12-16 10:13:04.229442] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:10.972 00:26:10.972 filename0: (groupid=0, jobs=1): err= 0: pid=102442: Mon Dec 16 10:13:09 2024 00:26:10.972 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(164MiB/5003msec) 00:26:10.972 slat (nsec): min=6399, max=56253, avg=10175.84, stdev=4068.58 00:26:10.972 clat (usec): min=5940, max=54619, avg=11402.91, stdev=3653.85 00:26:10.972 lat (usec): min=5949, max=54630, avg=11413.08, stdev=3653.80 00:26:10.972 clat percentiles (usec): 00:26:10.972 | 1.00th=[ 6980], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10552], 00:26:10.972 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11207], 60.00th=[11469], 00:26:10.972 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:26:10.972 | 99.00th=[13042], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:26:10.972 | 99.99th=[54789] 00:26:10.972 bw ( KiB/s): min=31488, max=34560, per=33.88%, avg=33536.00, stdev=1214.31, samples=9 00:26:10.972 iops : min= 246, max= 270, avg=262.00, stdev= 9.49, samples=9 00:26:10.972 lat (msec) : 10=9.28%, 20=90.03%, 100=0.68% 00:26:10.972 cpu : usr=92.90%, sys=5.68%, ctx=7, majf=0, minf=0 00:26:10.972 IO depths : 1=5.5%, 2=94.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 issued rwts: total=1314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.972 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.972 filename0: (groupid=0, jobs=1): err= 0: pid=102443: Mon Dec 16 10:13:09 2024 00:26:10.972 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5005msec) 00:26:10.972 slat (nsec): min=5236, max=39715, avg=11303.88, stdev=3970.35 00:26:10.972 clat (usec): min=4651, max=51170, avg=10066.57, stdev=2759.91 00:26:10.972 lat (usec): min=4661, max=51188, avg=10077.87, stdev=2760.39 00:26:10.972 clat percentiles (usec): 00:26:10.972 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9372], 00:26:10.972 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:26:10.972 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:26:10.972 | 99.00th=[12125], 99.50th=[15533], 99.90th=[51119], 99.95th=[51119], 00:26:10.972 | 99.99th=[51119] 00:26:10.972 bw ( KiB/s): min=32256, max=39936, per=38.33%, avg=37944.89, stdev=2263.75, samples=9 00:26:10.972 iops : min= 252, max= 312, avg=296.44, stdev=17.69, samples=9 00:26:10.972 lat (msec) : 10=49.29%, 20=50.30%, 50=0.07%, 100=0.34% 00:26:10.972 cpu : usr=92.51%, sys=5.92%, ctx=95, majf=0, minf=0 00:26:10.972 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 issued rwts: total=1489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.972 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.972 filename0: (groupid=0, jobs=1): err= 0: pid=102444: Mon Dec 16 10:13:09 2024 00:26:10.972 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5002msec) 00:26:10.972 slat (nsec): min=6449, max=48138, avg=9252.78, stdev=4030.05 00:26:10.972 clat (usec): min=8120, max=17053, avg=14025.67, stdev=1532.43 00:26:10.972 lat (usec): min=8141, max=17101, avg=14034.93, stdev=1532.44 00:26:10.972 clat percentiles (usec): 00:26:10.972 | 
1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[13042], 20.00th=[13566], 00:26:10.972 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:26:10.972 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:26:10.972 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:26:10.972 | 99.99th=[17171] 00:26:10.972 bw ( KiB/s): min=26112, max=30012, per=27.59%, avg=27313.33, stdev=1404.77, samples=9 00:26:10.972 iops : min= 204, max= 234, avg=213.33, stdev=10.86, samples=9 00:26:10.972 lat (msec) : 10=5.34%, 20=94.66% 00:26:10.972 cpu : usr=93.94%, sys=4.76%, ctx=3, majf=0, minf=9 00:26:10.972 IO depths : 1=32.5%, 2=67.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.972 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.972 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:10.972 00:26:10.972 Run status group 0 (all jobs): 00:26:10.972 READ: bw=96.7MiB/s (101MB/s), 26.7MiB/s-37.2MiB/s (28.0MB/s-39.0MB/s), io=484MiB (507MB), run=5002-5005msec 00:26:10.972 10:13:09 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:10.972 10:13:09 -- target/dif.sh@43 -- # local sub 00:26:10.972 10:13:09 -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.973 10:13:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.973 10:13:09 -- target/dif.sh@36 -- # local sub_id=0 00:26:10.973 10:13:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.973 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.973 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:10.973 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.973 10:13:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.973 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.973 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:10.973 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # bs=4k 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # numjobs=8 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # iodepth=16 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # runtime= 00:26:10.973 10:13:09 -- target/dif.sh@109 -- # files=2 00:26:10.973 10:13:09 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:10.973 10:13:09 -- target/dif.sh@28 -- # local sub 00:26:10.973 10:13:09 -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.973 10:13:09 -- target/dif.sh@31 -- # create_subsystem 0 00:26:10.973 10:13:09 -- target/dif.sh@18 -- # local sub_id=0 00:26:10.973 10:13:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:10.973 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.973 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:10.973 bdev_null0 00:26:10.973 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.973 10:13:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.973 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.973 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:10.973 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.973 10:13:09 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:10.973 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.973 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 [2024-12-16 10:13:09.604545] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@30 -- # for sub in "$@" 00:26:11.232 10:13:09 -- target/dif.sh@31 -- # create_subsystem 1 00:26:11.232 10:13:09 -- target/dif.sh@18 -- # local sub_id=1 00:26:11.232 10:13:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 bdev_null1 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@30 -- # for sub in "$@" 00:26:11.232 10:13:09 -- target/dif.sh@31 -- # create_subsystem 2 00:26:11.232 10:13:09 -- target/dif.sh@18 -- # local sub_id=2 00:26:11.232 10:13:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 bdev_null2 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 
10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:11.232 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.232 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:26:11.232 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.232 10:13:09 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:11.232 10:13:09 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:11.232 10:13:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:11.232 10:13:09 -- nvmf/common.sh@520 -- # config=() 00:26:11.232 10:13:09 -- nvmf/common.sh@520 -- # local subsystem config 00:26:11.232 10:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:11.232 { 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme$subsystem", 00:26:11.232 "trtype": "$TEST_TRANSPORT", 00:26:11.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "$NVMF_PORT", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.232 "hdgst": ${hdgst:-false}, 00:26:11.232 "ddgst": ${ddgst:-false} 00:26:11.232 }, 00:26:11.232 "method": "bdev_nvme_attach_controller" 00:26:11.232 } 00:26:11.232 EOF 00:26:11.232 )") 00:26:11.232 10:13:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.232 10:13:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.232 10:13:09 -- target/dif.sh@82 -- # gen_fio_conf 00:26:11.232 10:13:09 -- target/dif.sh@54 -- # local file 00:26:11.232 10:13:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:11.232 10:13:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.232 10:13:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:11.232 10:13:09 -- target/dif.sh@56 -- # cat 00:26:11.232 10:13:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # cat 00:26:11.232 10:13:09 -- common/autotest_common.sh@1330 -- # shift 00:26:11.232 10:13:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:11.232 10:13:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:11.232 10:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:11.232 { 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme$subsystem", 00:26:11.232 "trtype": "$TEST_TRANSPORT", 00:26:11.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "$NVMF_PORT", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.232 "hdgst": ${hdgst:-false}, 00:26:11.232 "ddgst": ${ddgst:-false} 00:26:11.232 }, 00:26:11.232 
"method": "bdev_nvme_attach_controller" 00:26:11.232 } 00:26:11.232 EOF 00:26:11.232 )") 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file <= files )) 00:26:11.232 10:13:09 -- target/dif.sh@73 -- # cat 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # cat 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file++ )) 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file <= files )) 00:26:11.232 10:13:09 -- target/dif.sh@73 -- # cat 00:26:11.232 10:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:11.232 { 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme$subsystem", 00:26:11.232 "trtype": "$TEST_TRANSPORT", 00:26:11.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "$NVMF_PORT", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:11.232 "hdgst": ${hdgst:-false}, 00:26:11.232 "ddgst": ${ddgst:-false} 00:26:11.232 }, 00:26:11.232 "method": "bdev_nvme_attach_controller" 00:26:11.232 } 00:26:11.232 EOF 00:26:11.232 )") 00:26:11.232 10:13:09 -- nvmf/common.sh@542 -- # cat 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file++ )) 00:26:11.232 10:13:09 -- target/dif.sh@72 -- # (( file <= files )) 00:26:11.232 10:13:09 -- nvmf/common.sh@544 -- # jq . 00:26:11.232 10:13:09 -- nvmf/common.sh@545 -- # IFS=, 00:26:11.232 10:13:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme0", 00:26:11.232 "trtype": "tcp", 00:26:11.232 "traddr": "10.0.0.2", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "4420", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:11.232 "hdgst": false, 00:26:11.232 "ddgst": false 00:26:11.232 }, 00:26:11.232 "method": "bdev_nvme_attach_controller" 00:26:11.232 },{ 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme1", 00:26:11.232 "trtype": "tcp", 00:26:11.232 "traddr": "10.0.0.2", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "4420", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.232 "hdgst": false, 00:26:11.232 "ddgst": false 00:26:11.232 }, 00:26:11.232 "method": "bdev_nvme_attach_controller" 00:26:11.232 },{ 00:26:11.232 "params": { 00:26:11.232 "name": "Nvme2", 00:26:11.232 "trtype": "tcp", 00:26:11.232 "traddr": "10.0.0.2", 00:26:11.232 "adrfam": "ipv4", 00:26:11.232 "trsvcid": "4420", 00:26:11.232 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:11.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:11.232 "hdgst": false, 00:26:11.232 "ddgst": false 00:26:11.232 }, 00:26:11.232 "method": "bdev_nvme_attach_controller" 00:26:11.232 }' 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:11.232 10:13:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:11.232 10:13:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:11.232 10:13:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:11.232 10:13:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:11.232 10:13:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:11.232 10:13:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.491 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:11.491 ... 00:26:11.491 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:11.491 ... 00:26:11.491 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:11.491 ... 00:26:11.491 fio-3.35 00:26:11.491 Starting 24 threads 00:26:12.058 [2024-12-16 10:13:10.536301] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:12.058 [2024-12-16 10:13:10.536381] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:24.259 00:26:24.259 filename0: (groupid=0, jobs=1): err= 0: pid=102539: Mon Dec 16 10:13:20 2024 00:26:24.259 read: IOPS=275, BW=1103KiB/s (1129kB/s)(10.8MiB/10036msec) 00:26:24.259 slat (usec): min=4, max=8034, avg=17.14, stdev=215.57 00:26:24.259 clat (msec): min=14, max=124, avg=57.87, stdev=21.00 00:26:24.259 lat (msec): min=14, max=124, avg=57.89, stdev=21.00 00:26:24.259 clat percentiles (msec): 00:26:24.259 | 1.00th=[ 18], 5.00th=[ 25], 10.00th=[ 34], 20.00th=[ 40], 00:26:24.259 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 58], 60.00th=[ 62], 00:26:24.259 | 70.00th=[ 68], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:26:24.259 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 125], 00:26:24.259 | 99.99th=[ 125] 00:26:24.259 bw ( KiB/s): min= 816, max= 2148, per=4.43%, avg=1100.45, stdev=289.70, samples=20 00:26:24.259 iops : min= 204, max= 537, avg=275.10, stdev=72.41, samples=20 00:26:24.259 lat (msec) : 20=1.55%, 50=39.93%, 100=55.04%, 250=3.47% 00:26:24.259 cpu : usr=39.51%, sys=0.60%, ctx=1093, majf=0, minf=9 00:26:24.259 IO depths : 1=0.9%, 2=2.0%, 4=8.2%, 8=76.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:24.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.259 filename0: (groupid=0, jobs=1): err= 0: pid=102540: Mon Dec 16 10:13:20 2024 00:26:24.259 read: IOPS=278, BW=1115KiB/s (1142kB/s)(10.9MiB/10040msec) 00:26:24.259 slat (usec): min=3, max=4034, avg=14.88, stdev=111.50 00:26:24.259 clat (msec): min=8, max=157, avg=57.22, stdev=24.75 00:26:24.259 lat (msec): min=8, max=157, avg=57.23, stdev=24.75 00:26:24.259 clat percentiles (msec): 00:26:24.259 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 39], 00:26:24.259 | 30.00th=[ 44], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 61], 00:26:24.259 | 70.00th=[ 68], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 102], 00:26:24.259 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 159], 00:26:24.259 | 99.99th=[ 159] 00:26:24.259 bw ( KiB/s): min= 896, max= 2888, per=4.48%, avg=1112.85, stdev=429.68, samples=20 00:26:24.259 iops : min= 224, max= 722, avg=278.20, stdev=107.42, samples=20 00:26:24.259 lat (msec) : 10=0.25%, 20=7.75%, 50=34.33%, 100=51.88%, 250=5.79% 00:26:24.259 cpu : usr=42.73%, sys=0.62%, ctx=1384, majf=0, minf=9 00:26:24.259 IO depths : 1=0.6%, 2=1.4%, 4=9.4%, 8=75.8%, 16=12.9%, 32=0.0%, >=64=0.0% 
00:26:24.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 issued rwts: total=2799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.259 filename0: (groupid=0, jobs=1): err= 0: pid=102541: Mon Dec 16 10:13:20 2024 00:26:24.259 read: IOPS=231, BW=927KiB/s (950kB/s)(9276KiB/10002msec) 00:26:24.259 slat (usec): min=3, max=4043, avg=14.30, stdev=84.01 00:26:24.259 clat (msec): min=6, max=136, avg=68.92, stdev=20.80 00:26:24.259 lat (msec): min=6, max=136, avg=68.93, stdev=20.80 00:26:24.259 clat percentiles (msec): 00:26:24.259 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 56], 00:26:24.259 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:26:24.259 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 108], 00:26:24.259 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:26:24.259 | 99.99th=[ 138] 00:26:24.259 bw ( KiB/s): min= 768, max= 1336, per=3.68%, avg=915.79, stdev=137.34, samples=19 00:26:24.259 iops : min= 192, max= 334, avg=228.95, stdev=34.33, samples=19 00:26:24.259 lat (msec) : 10=0.22%, 20=0.56%, 50=15.61%, 100=76.28%, 250=7.33% 00:26:24.259 cpu : usr=32.41%, sys=0.41%, ctx=921, majf=0, minf=9 00:26:24.259 IO depths : 1=1.1%, 2=3.3%, 4=12.5%, 8=70.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:24.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 complete : 0=0.0%, 4=91.0%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.259 filename0: (groupid=0, jobs=1): err= 0: pid=102542: Mon Dec 16 10:13:20 2024 00:26:24.259 read: IOPS=241, BW=968KiB/s (991kB/s)(9700KiB/10025msec) 00:26:24.259 slat (usec): min=3, max=8050, avg=19.46, stdev=230.23 00:26:24.259 clat (msec): min=23, max=141, avg=65.93, stdev=20.29 00:26:24.259 lat (msec): min=23, max=141, avg=65.95, stdev=20.29 00:26:24.259 clat percentiles (msec): 00:26:24.259 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 49], 00:26:24.259 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 00:26:24.259 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:26:24.259 | 99.00th=[ 120], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.259 | 99.99th=[ 142] 00:26:24.259 bw ( KiB/s): min= 768, max= 1536, per=3.89%, avg=966.40, stdev=166.25, samples=20 00:26:24.259 iops : min= 192, max= 384, avg=241.60, stdev=41.56, samples=20 00:26:24.259 lat (msec) : 50=22.14%, 100=72.16%, 250=5.69% 00:26:24.259 cpu : usr=33.50%, sys=0.47%, ctx=974, majf=0, minf=9 00:26:24.259 IO depths : 1=1.4%, 2=3.3%, 4=11.6%, 8=71.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:24.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.259 issued rwts: total=2425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.259 filename0: (groupid=0, jobs=1): err= 0: pid=102543: Mon Dec 16 10:13:20 2024 00:26:24.259 read: IOPS=247, BW=990KiB/s (1013kB/s)(9896KiB/10001msec) 00:26:24.259 slat (usec): min=4, max=4022, avg=16.54, stdev=123.40 00:26:24.259 clat (msec): min=10, max=141, avg=64.57, stdev=20.14 00:26:24.259 lat (msec): min=10, max=141, avg=64.58, stdev=20.14 
00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 48], 00:26:24.260 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 68], 00:26:24.260 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 103], 00:26:24.260 | 99.00th=[ 112], 99.50th=[ 125], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.260 | 99.99th=[ 142] 00:26:24.260 bw ( KiB/s): min= 736, max= 1498, per=3.97%, avg=985.37, stdev=180.67, samples=19 00:26:24.260 iops : min= 184, max= 374, avg=246.32, stdev=45.09, samples=19 00:26:24.260 lat (msec) : 20=0.65%, 50=23.12%, 100=70.65%, 250=5.58% 00:26:24.260 cpu : usr=42.36%, sys=0.65%, ctx=1322, majf=0, minf=9 00:26:24.260 IO depths : 1=1.7%, 2=3.7%, 4=11.4%, 8=70.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=90.4%, 8=5.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename0: (groupid=0, jobs=1): err= 0: pid=102544: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=236, BW=947KiB/s (970kB/s)(9480KiB/10009msec) 00:26:24.260 slat (usec): min=4, max=9028, avg=34.74, stdev=388.44 00:26:24.260 clat (msec): min=9, max=131, avg=67.33, stdev=21.18 00:26:24.260 lat (msec): min=9, max=131, avg=67.37, stdev=21.18 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 37], 20.00th=[ 50], 00:26:24.260 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:26:24.260 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 101], 00:26:24.260 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:26:24.260 | 99.99th=[ 132] 00:26:24.260 bw ( KiB/s): min= 640, max= 1534, per=3.80%, avg=943.05, stdev=205.18, samples=19 00:26:24.260 iops : min= 160, max= 383, avg=235.74, stdev=51.22, samples=19 00:26:24.260 lat (msec) : 10=0.21%, 20=0.80%, 50=19.83%, 100=74.14%, 250=5.02% 00:26:24.260 cpu : usr=36.68%, sys=0.79%, ctx=1303, majf=0, minf=9 00:26:24.260 IO depths : 1=2.0%, 2=4.6%, 4=14.3%, 8=67.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename0: (groupid=0, jobs=1): err= 0: pid=102545: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=294, BW=1180KiB/s (1208kB/s)(11.6MiB/10034msec) 00:26:24.260 slat (usec): min=4, max=4072, avg=17.88, stdev=165.31 00:26:24.260 clat (msec): min=7, max=123, avg=54.07, stdev=20.04 00:26:24.260 lat (msec): min=7, max=123, avg=54.09, stdev=20.04 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 40], 00:26:24.260 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 58], 00:26:24.260 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 80], 95.00th=[ 95], 00:26:24.260 | 99.00th=[ 111], 99.50th=[ 113], 99.90th=[ 124], 99.95th=[ 124], 00:26:24.260 | 99.99th=[ 124] 00:26:24.260 bw ( KiB/s): min= 848, max= 2176, per=4.74%, avg=1177.60, stdev=279.04, samples=20 00:26:24.260 iops : min= 212, max= 544, avg=294.40, stdev=69.76, samples=20 00:26:24.260 lat (msec) : 10=1.08%, 20=1.18%, 50=45.30%, 100=48.48%, 250=3.95% 00:26:24.260 cpu : usr=44.99%, sys=0.58%, 
ctx=1145, majf=0, minf=9 00:26:24.260 IO depths : 1=0.7%, 2=1.5%, 4=7.7%, 8=77.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename0: (groupid=0, jobs=1): err= 0: pid=102546: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=246, BW=985KiB/s (1008kB/s)(9876KiB/10028msec) 00:26:24.260 slat (usec): min=4, max=8022, avg=22.99, stdev=255.75 00:26:24.260 clat (msec): min=15, max=151, avg=64.74, stdev=21.82 00:26:24.260 lat (msec): min=15, max=151, avg=64.76, stdev=21.82 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 37], 20.00th=[ 48], 00:26:24.260 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 67], 00:26:24.260 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 101], 00:26:24.260 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:26:24.260 | 99.99th=[ 153] 00:26:24.260 bw ( KiB/s): min= 768, max= 1792, per=3.95%, avg=981.10, stdev=222.86, samples=20 00:26:24.260 iops : min= 192, max= 448, avg=245.25, stdev=55.71, samples=20 00:26:24.260 lat (msec) : 20=1.90%, 50=22.28%, 100=71.12%, 250=4.70% 00:26:24.260 cpu : usr=40.41%, sys=0.66%, ctx=1097, majf=0, minf=9 00:26:24.260 IO depths : 1=2.3%, 2=5.3%, 4=14.2%, 8=67.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename1: (groupid=0, jobs=1): err= 0: pid=102547: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=258, BW=1032KiB/s (1057kB/s)(10.1MiB/10031msec) 00:26:24.260 slat (usec): min=3, max=8023, avg=23.04, stdev=297.15 00:26:24.260 clat (msec): min=21, max=129, avg=61.88, stdev=20.17 00:26:24.260 lat (msec): min=21, max=129, avg=61.91, stdev=20.17 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 46], 00:26:24.260 | 30.00th=[ 49], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:26:24.260 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 96], 00:26:24.260 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:26:24.260 | 99.99th=[ 130] 00:26:24.260 bw ( KiB/s): min= 768, max= 1792, per=4.14%, avg=1028.80, stdev=214.31, samples=20 00:26:24.260 iops : min= 192, max= 448, avg=257.20, stdev=53.58, samples=20 00:26:24.260 lat (msec) : 50=31.22%, 100=65.03%, 250=3.75% 00:26:24.260 cpu : usr=32.75%, sys=0.43%, ctx=869, majf=0, minf=9 00:26:24.260 IO depths : 1=1.2%, 2=2.8%, 4=9.7%, 8=73.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename1: (groupid=0, jobs=1): err= 0: pid=102548: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=278, BW=1113KiB/s (1140kB/s)(10.9MiB/10031msec) 00:26:24.260 slat (usec): min=3, max=8021, avg=18.23, stdev=215.71 00:26:24.260 clat (msec): 
min=25, max=137, avg=57.34, stdev=18.21 00:26:24.260 lat (msec): min=25, max=137, avg=57.35, stdev=18.21 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 41], 00:26:24.260 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 61], 00:26:24.260 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 91], 00:26:24.260 | 99.00th=[ 107], 99.50th=[ 114], 99.90th=[ 138], 99.95th=[ 138], 00:26:24.260 | 99.99th=[ 138] 00:26:24.260 bw ( KiB/s): min= 896, max= 1584, per=4.47%, avg=1110.00, stdev=205.44, samples=20 00:26:24.260 iops : min= 224, max= 396, avg=277.50, stdev=51.36, samples=20 00:26:24.260 lat (msec) : 50=41.20%, 100=56.57%, 250=2.22% 00:26:24.260 cpu : usr=42.64%, sys=0.75%, ctx=1310, majf=0, minf=9 00:26:24.260 IO depths : 1=0.9%, 2=2.0%, 4=8.6%, 8=76.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=89.5%, 8=5.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename1: (groupid=0, jobs=1): err= 0: pid=102549: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=252, BW=1010KiB/s (1035kB/s)(9.89MiB/10024msec) 00:26:24.260 slat (usec): min=3, max=8029, avg=18.34, stdev=225.27 00:26:24.260 clat (msec): min=21, max=164, avg=63.16, stdev=20.43 00:26:24.260 lat (msec): min=21, max=164, avg=63.18, stdev=20.43 00:26:24.260 clat percentiles (msec): 00:26:24.260 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:26:24.260 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:24.260 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 96], 00:26:24.260 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 165], 99.95th=[ 165], 00:26:24.260 | 99.99th=[ 165] 00:26:24.260 bw ( KiB/s): min= 760, max= 1667, per=4.05%, avg=1006.45, stdev=200.54, samples=20 00:26:24.260 iops : min= 190, max= 416, avg=251.55, stdev=50.03, samples=20 00:26:24.260 lat (msec) : 50=30.25%, 100=65.24%, 250=4.50% 00:26:24.260 cpu : usr=33.54%, sys=0.43%, ctx=963, majf=0, minf=9 00:26:24.260 IO depths : 1=1.0%, 2=2.3%, 4=9.4%, 8=74.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.260 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.260 filename1: (groupid=0, jobs=1): err= 0: pid=102550: Mon Dec 16 10:13:20 2024 00:26:24.260 read: IOPS=309, BW=1236KiB/s (1266kB/s)(12.1MiB/10047msec) 00:26:24.261 slat (usec): min=3, max=8027, avg=21.46, stdev=245.65 00:26:24.261 clat (msec): min=5, max=117, avg=51.61, stdev=18.08 00:26:24.261 lat (msec): min=5, max=117, avg=51.63, stdev=18.08 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 32], 20.00th=[ 39], 00:26:24.261 | 30.00th=[ 42], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 56], 00:26:24.261 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 74], 95.00th=[ 84], 00:26:24.261 | 99.00th=[ 102], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 118], 00:26:24.261 | 99.99th=[ 118] 00:26:24.261 bw ( KiB/s): min= 896, max= 2496, per=4.98%, avg=1236.10, stdev=323.10, samples=20 00:26:24.261 iops : min= 224, max= 624, avg=308.90, stdev=80.80, samples=20 00:26:24.261 lat (msec) : 10=0.52%, 20=3.74%, 
50=45.57%, 100=48.92%, 250=1.26% 00:26:24.261 cpu : usr=43.62%, sys=0.69%, ctx=1524, majf=0, minf=9 00:26:24.261 IO depths : 1=1.0%, 2=2.1%, 4=9.1%, 8=75.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=3105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename1: (groupid=0, jobs=1): err= 0: pid=102551: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=287, BW=1151KiB/s (1178kB/s)(11.3MiB/10062msec) 00:26:24.261 slat (usec): min=4, max=8020, avg=18.78, stdev=217.92 00:26:24.261 clat (usec): min=1045, max=130838, avg=55469.74, stdev=23026.19 00:26:24.261 lat (usec): min=1055, max=130847, avg=55488.52, stdev=23030.91 00:26:24.261 clat percentiles (usec): 00:26:24.261 | 1.00th=[ 1369], 5.00th=[ 14746], 10.00th=[ 25297], 20.00th=[ 37487], 00:26:24.261 | 30.00th=[ 44303], 40.00th=[ 48497], 50.00th=[ 57410], 60.00th=[ 60031], 00:26:24.261 | 70.00th=[ 66847], 80.00th=[ 71828], 90.00th=[ 84411], 95.00th=[ 95945], 00:26:24.261 | 99.00th=[107480], 99.50th=[120062], 99.90th=[130548], 99.95th=[130548], 00:26:24.261 | 99.99th=[130548] 00:26:24.261 bw ( KiB/s): min= 816, max= 3264, per=4.63%, avg=1151.45, stdev=516.38, samples=20 00:26:24.261 iops : min= 204, max= 816, avg=287.85, stdev=129.09, samples=20 00:26:24.261 lat (msec) : 2=1.66%, 4=0.55%, 10=1.11%, 20=5.46%, 50=32.95% 00:26:24.261 lat (msec) : 100=54.96%, 250=3.32% 00:26:24.261 cpu : usr=37.47%, sys=0.94%, ctx=1483, majf=0, minf=9 00:26:24.261 IO depths : 1=0.8%, 2=1.8%, 4=8.2%, 8=76.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename1: (groupid=0, jobs=1): err= 0: pid=102552: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=282, BW=1129KiB/s (1157kB/s)(11.1MiB/10037msec) 00:26:24.261 slat (usec): min=4, max=8021, avg=19.63, stdev=204.51 00:26:24.261 clat (msec): min=13, max=139, avg=56.44, stdev=20.62 00:26:24.261 lat (msec): min=13, max=139, avg=56.46, stdev=20.63 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 41], 00:26:24.261 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 60], 00:26:24.261 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 93], 00:26:24.261 | 99.00th=[ 113], 99.50th=[ 123], 99.90th=[ 140], 99.95th=[ 140], 00:26:24.261 | 99.99th=[ 140] 00:26:24.261 bw ( KiB/s): min= 768, max= 2116, per=4.55%, avg=1131.05, stdev=277.79, samples=20 00:26:24.261 iops : min= 192, max= 529, avg=282.75, stdev=69.45, samples=20 00:26:24.261 lat (msec) : 20=1.98%, 50=43.08%, 100=52.05%, 250=2.89% 00:26:24.261 cpu : usr=43.65%, sys=0.72%, ctx=1417, majf=0, minf=9 00:26:24.261 IO depths : 1=0.7%, 2=1.8%, 4=8.0%, 8=76.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename1: (groupid=0, jobs=1): 
err= 0: pid=102553: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=241, BW=965KiB/s (988kB/s)(9660KiB/10009msec) 00:26:24.261 slat (usec): min=3, max=8262, avg=17.59, stdev=183.53 00:26:24.261 clat (msec): min=19, max=151, avg=66.18, stdev=19.98 00:26:24.261 lat (msec): min=19, max=151, avg=66.20, stdev=19.98 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:26:24.261 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:26:24.261 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 105], 00:26:24.261 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 153], 99.95th=[ 153], 00:26:24.261 | 99.99th=[ 153] 00:26:24.261 bw ( KiB/s): min= 768, max= 1466, per=3.86%, avg=959.70, stdev=162.84, samples=20 00:26:24.261 iops : min= 192, max= 366, avg=239.90, stdev=40.63, samples=20 00:26:24.261 lat (msec) : 20=0.46%, 50=21.45%, 100=72.67%, 250=5.42% 00:26:24.261 cpu : usr=32.28%, sys=0.49%, ctx=922, majf=0, minf=9 00:26:24.261 IO depths : 1=1.1%, 2=2.4%, 4=10.4%, 8=73.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename1: (groupid=0, jobs=1): err= 0: pid=102554: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=234, BW=937KiB/s (960kB/s)(9396KiB/10025msec) 00:26:24.261 slat (usec): min=3, max=8027, avg=19.06, stdev=233.42 00:26:24.261 clat (msec): min=20, max=128, avg=68.11, stdev=21.07 00:26:24.261 lat (msec): min=20, max=128, avg=68.13, stdev=21.07 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 50], 00:26:24.261 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:24.261 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 107], 00:26:24.261 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:26:24.261 | 99.99th=[ 129] 00:26:24.261 bw ( KiB/s): min= 722, max= 1667, per=3.78%, avg=938.05, stdev=198.14, samples=20 00:26:24.261 iops : min= 180, max= 416, avg=234.45, stdev=49.42, samples=20 00:26:24.261 lat (msec) : 50=20.86%, 100=72.84%, 250=6.30% 00:26:24.261 cpu : usr=32.59%, sys=0.57%, ctx=883, majf=0, minf=9 00:26:24.261 IO depths : 1=1.1%, 2=2.9%, 4=11.7%, 8=71.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename2: (groupid=0, jobs=1): err= 0: pid=102555: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=251, BW=1005KiB/s (1030kB/s)(9.84MiB/10025msec) 00:26:24.261 slat (usec): min=3, max=11110, avg=32.94, stdev=419.77 00:26:24.261 clat (msec): min=23, max=123, avg=63.39, stdev=18.95 00:26:24.261 lat (msec): min=23, max=123, avg=63.43, stdev=18.95 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:26:24.261 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:24.261 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 87], 95.00th=[ 99], 00:26:24.261 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:26:24.261 | 99.99th=[ 125] 
00:26:24.261 bw ( KiB/s): min= 768, max= 1415, per=4.04%, avg=1003.15, stdev=166.38, samples=20 00:26:24.261 iops : min= 192, max= 353, avg=250.75, stdev=41.50, samples=20 00:26:24.261 lat (msec) : 50=27.86%, 100=68.29%, 250=3.85% 00:26:24.261 cpu : usr=32.27%, sys=0.54%, ctx=939, majf=0, minf=9 00:26:24.261 IO depths : 1=1.1%, 2=2.5%, 4=10.2%, 8=74.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename2: (groupid=0, jobs=1): err= 0: pid=102556: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=236, BW=945KiB/s (968kB/s)(9452KiB/10002msec) 00:26:24.261 slat (usec): min=4, max=8030, avg=20.39, stdev=241.47 00:26:24.261 clat (msec): min=23, max=138, avg=67.56, stdev=21.05 00:26:24.261 lat (msec): min=23, max=138, avg=67.58, stdev=21.04 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 50], 00:26:24.261 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:26:24.261 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:26:24.261 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 140], 99.95th=[ 140], 00:26:24.261 | 99.99th=[ 140] 00:26:24.261 bw ( KiB/s): min= 768, max= 1424, per=3.78%, avg=938.53, stdev=156.77, samples=19 00:26:24.261 iops : min= 192, max= 356, avg=234.63, stdev=39.19, samples=19 00:26:24.261 lat (msec) : 50=20.69%, 100=72.03%, 250=7.28% 00:26:24.261 cpu : usr=35.95%, sys=0.60%, ctx=1192, majf=0, minf=9 00:26:24.261 IO depths : 1=1.8%, 2=4.3%, 4=13.0%, 8=69.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 complete : 0=0.0%, 4=91.0%, 8=4.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.261 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.261 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.261 filename2: (groupid=0, jobs=1): err= 0: pid=102557: Mon Dec 16 10:13:20 2024 00:26:24.261 read: IOPS=258, BW=1035KiB/s (1060kB/s)(10.1MiB/10033msec) 00:26:24.261 slat (usec): min=4, max=8036, avg=18.62, stdev=222.58 00:26:24.261 clat (msec): min=14, max=148, avg=61.67, stdev=21.45 00:26:24.261 lat (msec): min=14, max=148, avg=61.69, stdev=21.45 00:26:24.261 clat percentiles (msec): 00:26:24.261 | 1.00th=[ 17], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 46], 00:26:24.261 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:26:24.261 | 70.00th=[ 71], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 99], 00:26:24.261 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 148], 99.95th=[ 148], 00:26:24.261 | 99.99th=[ 148] 00:26:24.261 bw ( KiB/s): min= 736, max= 2048, per=4.15%, avg=1031.85, stdev=274.84, samples=20 00:26:24.261 iops : min= 184, max= 512, avg=257.95, stdev=68.70, samples=20 00:26:24.261 lat (msec) : 20=2.39%, 50=23.57%, 100=70.07%, 250=3.97% 00:26:24.261 cpu : usr=45.75%, sys=0.78%, ctx=1170, majf=0, minf=9 00:26:24.261 IO depths : 1=1.5%, 2=3.5%, 4=11.4%, 8=71.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:24.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:24.262 filename2: (groupid=0, jobs=1): err= 0: pid=102558: Mon Dec 16 10:13:20 2024 00:26:24.262 read: IOPS=267, BW=1068KiB/s (1094kB/s)(10.5MiB/10021msec) 00:26:24.262 slat (usec): min=4, max=8017, avg=17.85, stdev=218.93 00:26:24.262 clat (msec): min=23, max=134, avg=59.79, stdev=19.66 00:26:24.262 lat (msec): min=23, max=134, avg=59.81, stdev=19.66 00:26:24.262 clat percentiles (msec): 00:26:24.262 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 43], 00:26:24.262 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:26:24.262 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 99], 00:26:24.262 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 136], 99.95th=[ 136], 00:26:24.262 | 99.99th=[ 136] 00:26:24.262 bw ( KiB/s): min= 801, max= 1376, per=4.29%, avg=1066.90, stdev=171.27, samples=20 00:26:24.262 iops : min= 200, max= 344, avg=266.70, stdev=42.83, samples=20 00:26:24.262 lat (msec) : 50=36.55%, 100=58.86%, 250=4.60% 00:26:24.262 cpu : usr=39.11%, sys=0.56%, ctx=1102, majf=0, minf=9 00:26:24.262 IO depths : 1=0.7%, 2=1.6%, 4=7.8%, 8=76.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.262 filename2: (groupid=0, jobs=1): err= 0: pid=102559: Mon Dec 16 10:13:20 2024 00:26:24.262 read: IOPS=289, BW=1159KiB/s (1187kB/s)(11.3MiB/10009msec) 00:26:24.262 slat (usec): min=4, max=8017, avg=20.83, stdev=223.30 00:26:24.262 clat (msec): min=21, max=144, avg=55.09, stdev=17.94 00:26:24.262 lat (msec): min=21, max=144, avg=55.11, stdev=17.95 00:26:24.262 clat percentiles (msec): 00:26:24.262 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:26:24.262 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:26:24.262 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 87], 00:26:24.262 | 99.00th=[ 102], 99.50th=[ 110], 99.90th=[ 144], 99.95th=[ 144], 00:26:24.262 | 99.99th=[ 144] 00:26:24.262 bw ( KiB/s): min= 896, max= 1536, per=4.64%, avg=1153.45, stdev=167.21, samples=20 00:26:24.262 iops : min= 224, max= 384, avg=288.35, stdev=41.79, samples=20 00:26:24.262 lat (msec) : 50=46.48%, 100=52.34%, 250=1.17% 00:26:24.262 cpu : usr=44.37%, sys=0.67%, ctx=1210, majf=0, minf=9 00:26:24.262 IO depths : 1=1.0%, 2=2.4%, 4=9.3%, 8=74.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.262 filename2: (groupid=0, jobs=1): err= 0: pid=102560: Mon Dec 16 10:13:20 2024 00:26:24.262 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.78MiB/10001msec) 00:26:24.262 slat (usec): min=3, max=8038, avg=19.03, stdev=226.69 00:26:24.262 clat (msec): min=21, max=119, avg=63.80, stdev=20.57 00:26:24.262 lat (msec): min=21, max=119, avg=63.82, stdev=20.57 00:26:24.262 clat percentiles (msec): 00:26:24.262 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 48], 00:26:24.262 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:26:24.262 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 100], 00:26:24.262 | 99.00th=[ 114], 99.50th=[ 
120], 99.90th=[ 121], 99.95th=[ 121], 00:26:24.262 | 99.99th=[ 121] 00:26:24.262 bw ( KiB/s): min= 816, max= 1829, per=4.02%, avg=999.42, stdev=221.37, samples=19 00:26:24.262 iops : min= 204, max= 457, avg=249.84, stdev=55.29, samples=19 00:26:24.262 lat (msec) : 50=27.09%, 100=68.12%, 250=4.79% 00:26:24.262 cpu : usr=32.85%, sys=0.30%, ctx=872, majf=0, minf=9 00:26:24.262 IO depths : 1=1.6%, 2=3.6%, 4=11.6%, 8=71.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.262 filename2: (groupid=0, jobs=1): err= 0: pid=102561: Mon Dec 16 10:13:20 2024 00:26:24.262 read: IOPS=227, BW=912KiB/s (933kB/s)(9128KiB/10013msec) 00:26:24.262 slat (usec): min=3, max=9049, avg=19.59, stdev=252.82 00:26:24.262 clat (msec): min=20, max=147, avg=70.04, stdev=22.20 00:26:24.262 lat (msec): min=20, max=147, avg=70.06, stdev=22.20 00:26:24.262 clat percentiles (msec): 00:26:24.262 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 55], 00:26:24.262 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:24.262 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 108], 00:26:24.262 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 148], 00:26:24.262 | 99.99th=[ 148] 00:26:24.262 bw ( KiB/s): min= 752, max= 1536, per=3.65%, avg=906.40, stdev=177.11, samples=20 00:26:24.262 iops : min= 188, max= 384, avg=226.60, stdev=44.28, samples=20 00:26:24.262 lat (msec) : 50=16.74%, 100=75.24%, 250=8.02% 00:26:24.262 cpu : usr=32.67%, sys=0.44%, ctx=897, majf=0, minf=9 00:26:24.262 IO depths : 1=2.4%, 2=5.7%, 4=17.0%, 8=64.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.262 filename2: (groupid=0, jobs=1): err= 0: pid=102562: Mon Dec 16 10:13:20 2024 00:26:24.262 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.95MiB/10012msec) 00:26:24.262 slat (nsec): min=4747, max=41776, avg=12132.88, stdev=6737.27 00:26:24.262 clat (msec): min=19, max=135, avg=62.82, stdev=19.91 00:26:24.262 lat (msec): min=19, max=135, avg=62.83, stdev=19.91 00:26:24.262 clat percentiles (msec): 00:26:24.262 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:26:24.262 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 00:26:24.262 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 99], 00:26:24.262 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:26:24.262 | 99.99th=[ 136] 00:26:24.262 bw ( KiB/s): min= 816, max= 1456, per=4.08%, avg=1014.40, stdev=163.70, samples=20 00:26:24.262 iops : min= 204, max= 364, avg=253.60, stdev=40.92, samples=20 00:26:24.262 lat (msec) : 20=0.24%, 50=28.28%, 100=66.93%, 250=4.56% 00:26:24.262 cpu : usr=42.78%, sys=0.56%, ctx=1296, majf=0, minf=9 00:26:24.262 IO depths : 1=1.6%, 2=3.8%, 4=12.1%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:24.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.262 issued rwts: total=2546,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:24.262 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.262 00:26:24.262 Run status group 0 (all jobs): 00:26:24.262 READ: bw=24.3MiB/s (25.4MB/s), 912KiB/s-1236KiB/s (933kB/s-1266kB/s), io=244MiB (256MB), run=10001-10062msec 00:26:24.262 10:13:20 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:24.262 10:13:20 -- target/dif.sh@43 -- # local sub 00:26:24.262 10:13:20 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.262 10:13:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:24.262 10:13:20 -- target/dif.sh@36 -- # local sub_id=0 00:26:24.262 10:13:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.262 10:13:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:24.262 10:13:20 -- target/dif.sh@36 -- # local sub_id=1 00:26:24.262 10:13:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.262 10:13:20 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:24.262 10:13:20 -- target/dif.sh@36 -- # local sub_id=2 00:26:24.262 10:13:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:24.262 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.262 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.262 10:13:20 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:24.262 10:13:20 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:24.262 10:13:20 -- target/dif.sh@115 -- # numjobs=2 00:26:24.262 10:13:20 -- target/dif.sh@115 -- # iodepth=8 00:26:24.263 10:13:20 -- target/dif.sh@115 -- # runtime=5 00:26:24.263 10:13:20 -- target/dif.sh@115 -- # files=1 00:26:24.263 10:13:20 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:24.263 10:13:20 -- target/dif.sh@28 -- # local sub 00:26:24.263 10:13:20 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.263 10:13:20 -- target/dif.sh@31 -- # create_subsystem 0 00:26:24.263 10:13:20 -- target/dif.sh@18 -- # local sub_id=0 00:26:24.263 10:13:20 -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:24.263 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 bdev_null0 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 [2024-12-16 10:13:21.025674] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.263 10:13:21 -- target/dif.sh@31 -- # create_subsystem 1 00:26:24.263 10:13:21 -- target/dif.sh@18 -- # local sub_id=1 00:26:24.263 10:13:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 bdev_null1 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.263 10:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.263 10:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:24.263 10:13:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.263 10:13:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:24.263 10:13:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:24.263 10:13:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:24.263 10:13:21 -- nvmf/common.sh@520 -- # config=() 00:26:24.263 10:13:21 -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.263 10:13:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.263 10:13:21 -- 
nvmf/common.sh@520 -- # local subsystem config 00:26:24.263 10:13:21 -- target/dif.sh@54 -- # local file 00:26:24.263 10:13:21 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.263 10:13:21 -- target/dif.sh@56 -- # cat 00:26:24.263 10:13:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.263 10:13:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.263 { 00:26:24.263 "params": { 00:26:24.263 "name": "Nvme$subsystem", 00:26:24.263 "trtype": "$TEST_TRANSPORT", 00:26:24.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.263 "adrfam": "ipv4", 00:26:24.263 "trsvcid": "$NVMF_PORT", 00:26:24.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.263 "hdgst": ${hdgst:-false}, 00:26:24.263 "ddgst": ${ddgst:-false} 00:26:24.263 }, 00:26:24.263 "method": "bdev_nvme_attach_controller" 00:26:24.263 } 00:26:24.263 EOF 00:26:24.263 )") 00:26:24.263 10:13:21 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:24.263 10:13:21 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.263 10:13:21 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:24.263 10:13:21 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.263 10:13:21 -- common/autotest_common.sh@1330 -- # shift 00:26:24.263 10:13:21 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:24.263 10:13:21 -- nvmf/common.sh@542 -- # cat 00:26:24.263 10:13:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.263 10:13:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.263 10:13:21 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.263 10:13:21 -- target/dif.sh@73 -- # cat 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:24.263 10:13:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.263 10:13:21 -- target/dif.sh@72 -- # (( file++ )) 00:26:24.263 10:13:21 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.263 10:13:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.263 { 00:26:24.263 "params": { 00:26:24.263 "name": "Nvme$subsystem", 00:26:24.263 "trtype": "$TEST_TRANSPORT", 00:26:24.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.263 "adrfam": "ipv4", 00:26:24.263 "trsvcid": "$NVMF_PORT", 00:26:24.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.263 "hdgst": ${hdgst:-false}, 00:26:24.263 "ddgst": ${ddgst:-false} 00:26:24.263 }, 00:26:24.263 "method": "bdev_nvme_attach_controller" 00:26:24.263 } 00:26:24.263 EOF 00:26:24.263 )") 00:26:24.263 10:13:21 -- nvmf/common.sh@542 -- # cat 00:26:24.263 10:13:21 -- nvmf/common.sh@544 -- # jq . 
00:26:24.263 10:13:21 -- nvmf/common.sh@545 -- # IFS=, 00:26:24.263 10:13:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:24.263 "params": { 00:26:24.263 "name": "Nvme0", 00:26:24.263 "trtype": "tcp", 00:26:24.263 "traddr": "10.0.0.2", 00:26:24.263 "adrfam": "ipv4", 00:26:24.263 "trsvcid": "4420", 00:26:24.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.263 "hdgst": false, 00:26:24.263 "ddgst": false 00:26:24.263 }, 00:26:24.263 "method": "bdev_nvme_attach_controller" 00:26:24.263 },{ 00:26:24.263 "params": { 00:26:24.263 "name": "Nvme1", 00:26:24.263 "trtype": "tcp", 00:26:24.263 "traddr": "10.0.0.2", 00:26:24.263 "adrfam": "ipv4", 00:26:24.263 "trsvcid": "4420", 00:26:24.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.263 "hdgst": false, 00:26:24.263 "ddgst": false 00:26:24.263 }, 00:26:24.263 "method": "bdev_nvme_attach_controller" 00:26:24.263 }' 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.263 10:13:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.263 10:13:21 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:24.263 10:13:21 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.263 10:13:21 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.263 10:13:21 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:24.263 10:13:21 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.263 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:24.263 ... 00:26:24.263 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:24.263 ... 00:26:24.263 fio-3.35 00:26:24.263 Starting 4 threads 00:26:24.264 [2024-12-16 10:13:21.751566] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
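The randread pass above is driven entirely through the fio bdev plugin: the JSON printed by gen_nvmf_target_json attaches the two NVMe/TCP controllers as SPDK bdevs, and fio reads both that JSON and the generated job file from file descriptors. A rough standalone equivalent is sketched below; it assumes the target is already listening on 10.0.0.2:4420 and that the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (illustrative names, not taken from the log).

  # nvme.json holds the two bdev_nvme_attach_controller calls printed above
  cat > dif.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF

  # preload the plugin exactly as the harness does, point it at the JSON config
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --spdk_json_conf=nvme.json dif.fio

The harness avoids temporary files by handing both documents over on /dev/fd/62 and /dev/fd/61, which is why the fio command recorded in the log carries no visible job file.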
00:26:24.264 [2024-12-16 10:13:21.751834] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:28.454 00:26:28.454 filename0: (groupid=0, jobs=1): err= 0: pid=102699: Mon Dec 16 10:13:26 2024 00:26:28.454 read: IOPS=2192, BW=17.1MiB/s (18.0MB/s)(85.7MiB/5003msec) 00:26:28.454 slat (nsec): min=6645, max=54919, avg=13379.73, stdev=4960.47 00:26:28.454 clat (usec): min=2574, max=4783, avg=3587.52, stdev=165.61 00:26:28.454 lat (usec): min=2586, max=4796, avg=3600.90, stdev=165.71 00:26:28.454 clat percentiles (usec): 00:26:28.454 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3458], 00:26:28.454 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3621], 00:26:28.454 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:28.454 | 99.00th=[ 4015], 99.50th=[ 4080], 99.90th=[ 4228], 99.95th=[ 4359], 00:26:28.454 | 99.99th=[ 4686] 00:26:28.454 bw ( KiB/s): min=16768, max=18176, per=24.84%, avg=17482.78, stdev=502.75, samples=9 00:26:28.454 iops : min= 2096, max= 2272, avg=2185.33, stdev=62.86, samples=9 00:26:28.454 lat (msec) : 4=98.64%, 10=1.36% 00:26:28.454 cpu : usr=95.06%, sys=3.78%, ctx=7, majf=0, minf=0 00:26:28.454 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 issued rwts: total=10968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:28.454 filename0: (groupid=0, jobs=1): err= 0: pid=102700: Mon Dec 16 10:13:26 2024 00:26:28.454 read: IOPS=2193, BW=17.1MiB/s (18.0MB/s)(85.7MiB/5001msec) 00:26:28.454 slat (nsec): min=6410, max=53751, avg=14790.95, stdev=4493.64 00:26:28.454 clat (usec): min=993, max=5906, avg=3574.30, stdev=199.05 00:26:28.454 lat (usec): min=999, max=5919, avg=3589.09, stdev=199.71 00:26:28.454 clat percentiles (usec): 00:26:28.454 | 1.00th=[ 3294], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3425], 00:26:28.454 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:28.454 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:28.454 | 99.00th=[ 4015], 99.50th=[ 4080], 99.90th=[ 5342], 99.95th=[ 5669], 00:26:28.454 | 99.99th=[ 5866] 00:26:28.454 bw ( KiB/s): min=16673, max=18304, per=24.84%, avg=17482.78, stdev=529.50, samples=9 00:26:28.454 iops : min= 2084, max= 2288, avg=2185.33, stdev=66.21, samples=9 00:26:28.454 lat (usec) : 1000=0.01% 00:26:28.454 lat (msec) : 2=0.08%, 4=98.70%, 10=1.21% 00:26:28.454 cpu : usr=94.74%, sys=4.14%, ctx=6, majf=0, minf=0 00:26:28.454 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 issued rwts: total=10968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:28.454 filename1: (groupid=0, jobs=1): err= 0: pid=102701: Mon Dec 16 10:13:26 2024 00:26:28.454 read: IOPS=2191, BW=17.1MiB/s (18.0MB/s)(85.6MiB/5001msec) 00:26:28.454 slat (nsec): min=4681, max=53693, avg=15080.66, stdev=4467.46 00:26:28.454 clat (usec): min=2614, max=6220, avg=3577.07, stdev=177.53 00:26:28.454 lat (usec): min=2625, max=6235, avg=3592.15, stdev=177.92 00:26:28.454 clat percentiles (usec): 
00:26:28.454 | 1.00th=[ 3294], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3425], 00:26:28.454 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:28.454 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:28.454 | 99.00th=[ 4015], 99.50th=[ 4080], 99.90th=[ 4555], 99.95th=[ 5800], 00:26:28.454 | 99.99th=[ 5800] 00:26:28.454 bw ( KiB/s): min=16640, max=18304, per=24.83%, avg=17479.11, stdev=535.89, samples=9 00:26:28.454 iops : min= 2080, max= 2288, avg=2184.89, stdev=66.99, samples=9 00:26:28.454 lat (msec) : 4=98.75%, 10=1.25% 00:26:28.454 cpu : usr=94.32%, sys=4.54%, ctx=15, majf=0, minf=0 00:26:28.454 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 issued rwts: total=10960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:28.454 filename1: (groupid=0, jobs=1): err= 0: pid=102702: Mon Dec 16 10:13:26 2024 00:26:28.454 read: IOPS=2224, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5001msec) 00:26:28.454 slat (nsec): min=6637, max=60338, avg=8542.12, stdev=3373.49 00:26:28.454 clat (usec): min=572, max=4433, avg=3557.88, stdev=316.41 00:26:28.454 lat (usec): min=578, max=4443, avg=3566.43, stdev=316.52 00:26:28.454 clat percentiles (usec): 00:26:28.454 | 1.00th=[ 1844], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3458], 00:26:28.454 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3621], 00:26:28.454 | 70.00th=[ 3687], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3884], 00:26:28.454 | 99.00th=[ 4015], 99.50th=[ 4113], 99.90th=[ 4228], 99.95th=[ 4293], 00:26:28.454 | 99.99th=[ 4359] 00:26:28.454 bw ( KiB/s): min=17088, max=18384, per=25.25%, avg=17770.67, stdev=504.19, samples=9 00:26:28.454 iops : min= 2136, max= 2298, avg=2221.33, stdev=63.02, samples=9 00:26:28.454 lat (usec) : 750=0.13%, 1000=0.08% 00:26:28.454 lat (msec) : 2=1.12%, 4=97.61%, 10=1.05% 00:26:28.454 cpu : usr=94.10%, sys=4.66%, ctx=88, majf=0, minf=0 00:26:28.454 IO depths : 1=7.9%, 2=19.6%, 4=54.7%, 8=17.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.454 issued rwts: total=11124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.455 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:28.455 00:26:28.455 Run status group 0 (all jobs): 00:26:28.455 READ: bw=68.7MiB/s (72.1MB/s), 17.1MiB/s-17.4MiB/s (18.0MB/s-18.2MB/s), io=344MiB (361MB), run=5001-5003msec 00:26:28.713 10:13:27 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:28.713 10:13:27 -- target/dif.sh@43 -- # local sub 00:26:28.713 10:13:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:28.713 10:13:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:28.713 10:13:27 -- target/dif.sh@36 -- # local sub_id=0 00:26:28.713 10:13:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@45 -- # for sub in "$@" 00:26:28.713 10:13:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:28.713 10:13:27 -- target/dif.sh@36 -- # local sub_id=1 00:26:28.713 10:13:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 ************************************ 00:26:28.713 END TEST fio_dif_rand_params 00:26:28.713 ************************************ 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 00:26:28.713 real 0m23.569s 00:26:28.713 user 2m7.240s 00:26:28.713 sys 0m3.940s 00:26:28.713 10:13:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:28.713 10:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:28.713 10:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 ************************************ 00:26:28.713 START TEST fio_dif_digest 00:26:28.713 ************************************ 00:26:28.713 10:13:27 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:28.713 10:13:27 -- target/dif.sh@123 -- # local NULL_DIF 00:26:28.713 10:13:27 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:28.713 10:13:27 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:28.713 10:13:27 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:28.713 10:13:27 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:28.713 10:13:27 -- target/dif.sh@127 -- # numjobs=3 00:26:28.713 10:13:27 -- target/dif.sh@127 -- # iodepth=3 00:26:28.713 10:13:27 -- target/dif.sh@127 -- # runtime=10 00:26:28.713 10:13:27 -- target/dif.sh@128 -- # hdgst=true 00:26:28.713 10:13:27 -- target/dif.sh@128 -- # ddgst=true 00:26:28.713 10:13:27 -- target/dif.sh@130 -- # create_subsystems 0 00:26:28.713 10:13:27 -- target/dif.sh@28 -- # local sub 00:26:28.713 10:13:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:28.713 10:13:27 -- target/dif.sh@31 -- # create_subsystem 0 00:26:28.713 10:13:27 -- target/dif.sh@18 -- # local sub_id=0 00:26:28.713 10:13:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 bdev_null0 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.713 10:13:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:28.713 10:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.713 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:26:28.728 [2024-12-16 10:13:27.208913] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.728 10:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.728 10:13:27 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:28.728 10:13:27 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:28.728 10:13:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:28.728 10:13:27 -- nvmf/common.sh@520 -- # config=() 00:26:28.728 10:13:27 -- nvmf/common.sh@520 -- # local subsystem config 00:26:28.728 10:13:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.728 10:13:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.728 { 00:26:28.728 "params": { 00:26:28.728 "name": "Nvme$subsystem", 00:26:28.728 "trtype": "$TEST_TRANSPORT", 00:26:28.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.728 "adrfam": "ipv4", 00:26:28.728 "trsvcid": "$NVMF_PORT", 00:26:28.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.728 "hdgst": ${hdgst:-false}, 00:26:28.728 "ddgst": ${ddgst:-false} 00:26:28.728 }, 00:26:28.728 "method": "bdev_nvme_attach_controller" 00:26:28.728 } 00:26:28.728 EOF 00:26:28.728 )") 00:26:28.728 10:13:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.728 10:13:27 -- target/dif.sh@82 -- # gen_fio_conf 00:26:28.728 10:13:27 -- target/dif.sh@54 -- # local file 00:26:28.728 10:13:27 -- target/dif.sh@56 -- # cat 00:26:28.728 10:13:27 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.729 10:13:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:28.729 10:13:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.729 10:13:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:28.729 10:13:27 -- nvmf/common.sh@542 -- # cat 00:26:28.729 10:13:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.729 10:13:27 -- common/autotest_common.sh@1330 -- # shift 00:26:28.729 10:13:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:28.729 10:13:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.729 10:13:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:28.729 10:13:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:28.729 10:13:27 -- nvmf/common.sh@544 -- # jq . 
00:26:28.729 10:13:27 -- nvmf/common.sh@545 -- # IFS=, 00:26:28.729 10:13:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:28.729 "params": { 00:26:28.729 "name": "Nvme0", 00:26:28.729 "trtype": "tcp", 00:26:28.729 "traddr": "10.0.0.2", 00:26:28.729 "adrfam": "ipv4", 00:26:28.729 "trsvcid": "4420", 00:26:28.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:28.729 "hdgst": true, 00:26:28.729 "ddgst": true 00:26:28.729 }, 00:26:28.729 "method": "bdev_nvme_attach_controller" 00:26:28.729 }' 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.729 10:13:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.729 10:13:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:28.729 10:13:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:28.729 10:13:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:28.729 10:13:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:28.729 10:13:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:28.987 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:28.987 ... 00:26:28.987 fio-3.35 00:26:28.987 Starting 3 threads 00:26:29.250 [2024-12-16 10:13:27.845018] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
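The digest pass rebuilds a single subsystem on top of a protected null bdev: 64 MiB, 512-byte blocks, 16 bytes of per-block metadata, DIF type 3. The rpc_cmd calls above are the harness wrapper around scripts/rpc.py, so the same target-side setup can be reproduced by hand as:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

On the initiator side the only change from the earlier runs is in the JSON just printed: "hdgst": true and "ddgst": true on bdev_nvme_attach_controller enable NVMe/TCP header and data digests for the connection, and fio then drives it with three 128 KiB randread jobs at iodepth 3 for roughly 10 seconds.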
00:26:29.250 [2024-12-16 10:13:27.845111] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:41.463 00:26:41.463 filename0: (groupid=0, jobs=1): err= 0: pid=102808: Mon Dec 16 10:13:38 2024 00:26:41.463 read: IOPS=270, BW=33.8MiB/s (35.4MB/s)(339MiB/10048msec) 00:26:41.463 slat (nsec): min=6906, max=43727, avg=11466.59, stdev=3204.93 00:26:41.463 clat (usec): min=8627, max=51904, avg=11077.66, stdev=1861.57 00:26:41.463 lat (usec): min=8638, max=51915, avg=11089.13, stdev=1861.67 00:26:41.463 clat percentiles (usec): 00:26:41.463 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:26:41.463 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:26:41.463 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:26:41.463 | 99.00th=[13042], 99.50th=[13698], 99.90th=[51643], 99.95th=[51643], 00:26:41.464 | 99.99th=[52167] 00:26:41.464 bw ( KiB/s): min=32256, max=36096, per=38.71%, avg=34710.15, stdev=1113.08, samples=20 00:26:41.464 iops : min= 252, max= 282, avg=271.15, stdev= 8.70, samples=20 00:26:41.464 lat (msec) : 10=6.74%, 20=93.07%, 50=0.04%, 100=0.15% 00:26:41.464 cpu : usr=92.61%, sys=6.03%, ctx=8, majf=0, minf=9 00:26:41.464 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.464 filename0: (groupid=0, jobs=1): err= 0: pid=102809: Mon Dec 16 10:13:38 2024 00:26:41.464 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(303MiB/10006msec) 00:26:41.464 slat (nsec): min=6716, max=46665, avg=10714.31, stdev=3675.64 00:26:41.464 clat (usec): min=6555, max=15987, avg=12385.55, stdev=967.51 00:26:41.464 lat (usec): min=6580, max=15999, avg=12396.26, stdev=967.63 00:26:41.464 clat percentiles (usec): 00:26:41.464 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:26:41.464 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:26:41.464 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:26:41.464 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15533], 99.95th=[15664], 00:26:41.464 | 99.99th=[15926] 00:26:41.464 bw ( KiB/s): min=29440, max=33024, per=34.50%, avg=30935.79, stdev=924.05, samples=19 00:26:41.464 iops : min= 230, max= 258, avg=241.63, stdev= 7.22, samples=19 00:26:41.464 lat (msec) : 10=0.95%, 20=99.05% 00:26:41.464 cpu : usr=93.69%, sys=5.07%, ctx=15, majf=0, minf=9 00:26:41.464 IO depths : 1=5.7%, 2=94.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.464 filename0: (groupid=0, jobs=1): err= 0: pid=102810: Mon Dec 16 10:13:38 2024 00:26:41.464 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:26:41.464 slat (nsec): min=6885, max=43069, avg=11742.46, stdev=3641.95 00:26:41.464 clat (usec): min=9297, max=51264, avg=15789.20, stdev=1488.41 00:26:41.464 lat (usec): min=9307, max=51275, avg=15800.95, stdev=1488.58 00:26:41.464 clat percentiles (usec): 00:26:41.464 | 
1.00th=[13960], 5.00th=[14484], 10.00th=[14746], 20.00th=[15008], 00:26:41.464 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:26:41.464 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:26:41.464 | 99.00th=[18482], 99.50th=[19006], 99.90th=[49021], 99.95th=[51119], 00:26:41.464 | 99.99th=[51119] 00:26:41.464 bw ( KiB/s): min=22784, max=25600, per=27.16%, avg=24350.35, stdev=762.77, samples=20 00:26:41.464 iops : min= 178, max= 200, avg=190.20, stdev= 5.98, samples=20 00:26:41.464 lat (msec) : 10=0.32%, 20=99.53%, 50=0.11%, 100=0.05% 00:26:41.464 cpu : usr=93.00%, sys=5.81%, ctx=14, majf=0, minf=9 00:26:41.464 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.464 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.464 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.464 00:26:41.464 Run status group 0 (all jobs): 00:26:41.464 READ: bw=87.6MiB/s (91.8MB/s), 23.7MiB/s-33.8MiB/s (24.8MB/s-35.4MB/s), io=880MiB (922MB), run=10006-10048msec 00:26:41.464 10:13:38 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:41.464 10:13:38 -- target/dif.sh@43 -- # local sub 00:26:41.464 10:13:38 -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.464 10:13:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:41.464 10:13:38 -- target/dif.sh@36 -- # local sub_id=0 00:26:41.464 10:13:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:41.464 10:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.464 10:13:38 -- common/autotest_common.sh@10 -- # set +x 00:26:41.464 10:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.464 10:13:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:41.464 10:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.464 10:13:38 -- common/autotest_common.sh@10 -- # set +x 00:26:41.464 ************************************ 00:26:41.464 END TEST fio_dif_digest 00:26:41.464 ************************************ 00:26:41.464 10:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.464 00:26:41.464 real 0m11.107s 00:26:41.464 user 0m28.720s 00:26:41.464 sys 0m1.998s 00:26:41.464 10:13:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.464 10:13:38 -- common/autotest_common.sh@10 -- # set +x 00:26:41.464 10:13:38 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:41.464 10:13:38 -- target/dif.sh@147 -- # nvmftestfini 00:26:41.464 10:13:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.464 10:13:38 -- nvmf/common.sh@116 -- # sync 00:26:41.464 10:13:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.464 10:13:38 -- nvmf/common.sh@119 -- # set +e 00:26:41.464 10:13:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.464 10:13:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.464 rmmod nvme_tcp 00:26:41.464 rmmod nvme_fabrics 00:26:41.464 rmmod nvme_keyring 00:26:41.464 10:13:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:41.464 10:13:38 -- nvmf/common.sh@123 -- # set -e 00:26:41.464 10:13:38 -- nvmf/common.sh@124 -- # return 0 00:26:41.464 10:13:38 -- nvmf/common.sh@477 -- # '[' -n 102037 ']' 00:26:41.464 10:13:38 -- nvmf/common.sh@478 -- # killprocess 102037 00:26:41.464 10:13:38 -- common/autotest_common.sh@936 -- # '[' -z 102037 ']' 
00:26:41.464 10:13:38 -- common/autotest_common.sh@940 -- # kill -0 102037 00:26:41.464 10:13:38 -- common/autotest_common.sh@941 -- # uname 00:26:41.464 10:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:41.464 10:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102037 00:26:41.464 killing process with pid 102037 00:26:41.464 10:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:41.464 10:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:41.464 10:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102037' 00:26:41.464 10:13:38 -- common/autotest_common.sh@955 -- # kill 102037 00:26:41.464 10:13:38 -- common/autotest_common.sh@960 -- # wait 102037 00:26:41.464 10:13:38 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:41.464 10:13:38 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:41.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:41.464 Waiting for block devices as requested 00:26:41.464 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.464 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:41.464 10:13:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:41.464 10:13:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:41.464 10:13:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.464 10:13:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:41.464 10:13:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.464 10:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:41.464 10:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.464 10:13:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:41.464 ************************************ 00:26:41.464 END TEST nvmf_dif 00:26:41.464 ************************************ 00:26:41.464 00:26:41.464 real 1m0.275s 00:26:41.464 user 3m52.431s 00:26:41.464 sys 0m14.295s 00:26:41.464 10:13:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.464 10:13:39 -- common/autotest_common.sh@10 -- # set +x 00:26:41.464 10:13:39 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:41.464 10:13:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:41.464 10:13:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:41.464 10:13:39 -- common/autotest_common.sh@10 -- # set +x 00:26:41.464 ************************************ 00:26:41.464 START TEST nvmf_abort_qd_sizes 00:26:41.464 ************************************ 00:26:41.464 10:13:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:41.464 * Looking for test storage... 
00:26:41.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:41.464 10:13:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:41.464 10:13:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:41.464 10:13:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:41.464 10:13:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:41.465 10:13:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:41.465 10:13:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:41.465 10:13:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:41.465 10:13:39 -- scripts/common.sh@335 -- # IFS=.-: 00:26:41.465 10:13:39 -- scripts/common.sh@335 -- # read -ra ver1 00:26:41.465 10:13:39 -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.465 10:13:39 -- scripts/common.sh@336 -- # read -ra ver2 00:26:41.465 10:13:39 -- scripts/common.sh@337 -- # local 'op=<' 00:26:41.465 10:13:39 -- scripts/common.sh@339 -- # ver1_l=2 00:26:41.465 10:13:39 -- scripts/common.sh@340 -- # ver2_l=1 00:26:41.465 10:13:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:41.465 10:13:39 -- scripts/common.sh@343 -- # case "$op" in 00:26:41.465 10:13:39 -- scripts/common.sh@344 -- # : 1 00:26:41.465 10:13:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:41.465 10:13:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.465 10:13:39 -- scripts/common.sh@364 -- # decimal 1 00:26:41.465 10:13:39 -- scripts/common.sh@352 -- # local d=1 00:26:41.465 10:13:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.465 10:13:39 -- scripts/common.sh@354 -- # echo 1 00:26:41.465 10:13:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:41.465 10:13:39 -- scripts/common.sh@365 -- # decimal 2 00:26:41.465 10:13:39 -- scripts/common.sh@352 -- # local d=2 00:26:41.465 10:13:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.465 10:13:39 -- scripts/common.sh@354 -- # echo 2 00:26:41.465 10:13:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:41.465 10:13:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:41.465 10:13:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:41.465 10:13:39 -- scripts/common.sh@367 -- # return 0 00:26:41.465 10:13:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.465 10:13:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:41.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.465 --rc genhtml_branch_coverage=1 00:26:41.465 --rc genhtml_function_coverage=1 00:26:41.465 --rc genhtml_legend=1 00:26:41.465 --rc geninfo_all_blocks=1 00:26:41.465 --rc geninfo_unexecuted_blocks=1 00:26:41.465 00:26:41.465 ' 00:26:41.465 10:13:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:41.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.465 --rc genhtml_branch_coverage=1 00:26:41.465 --rc genhtml_function_coverage=1 00:26:41.465 --rc genhtml_legend=1 00:26:41.465 --rc geninfo_all_blocks=1 00:26:41.465 --rc geninfo_unexecuted_blocks=1 00:26:41.465 00:26:41.465 ' 00:26:41.465 10:13:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:41.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.465 --rc genhtml_branch_coverage=1 00:26:41.465 --rc genhtml_function_coverage=1 00:26:41.465 --rc genhtml_legend=1 00:26:41.465 --rc geninfo_all_blocks=1 00:26:41.465 --rc geninfo_unexecuted_blocks=1 00:26:41.465 00:26:41.465 ' 00:26:41.465 
10:13:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:41.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.465 --rc genhtml_branch_coverage=1 00:26:41.465 --rc genhtml_function_coverage=1 00:26:41.465 --rc genhtml_legend=1 00:26:41.465 --rc geninfo_all_blocks=1 00:26:41.465 --rc geninfo_unexecuted_blocks=1 00:26:41.465 00:26:41.465 ' 00:26:41.465 10:13:39 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.465 10:13:39 -- nvmf/common.sh@7 -- # uname -s 00:26:41.465 10:13:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.465 10:13:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.465 10:13:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.465 10:13:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.465 10:13:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.465 10:13:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.465 10:13:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.465 10:13:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.465 10:13:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.465 10:13:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:26:41.465 10:13:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed 00:26:41.465 10:13:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.465 10:13:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.465 10:13:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.465 10:13:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.465 10:13:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.465 10:13:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.465 10:13:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.465 10:13:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.465 10:13:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.465 10:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.465 10:13:39 -- paths/export.sh@5 -- # export PATH 00:26:41.465 10:13:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.465 10:13:39 -- nvmf/common.sh@46 -- # : 0 00:26:41.465 10:13:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:41.465 10:13:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:41.465 10:13:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:41.465 10:13:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.465 10:13:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.465 10:13:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:41.465 10:13:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:41.465 10:13:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:41.465 10:13:39 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:41.465 10:13:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:41.465 10:13:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.465 10:13:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:41.465 10:13:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:41.465 10:13:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:41.465 10:13:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.465 10:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:41.465 10:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.465 10:13:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:41.465 10:13:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:41.465 10:13:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.465 10:13:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.465 10:13:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:41.465 10:13:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:41.465 10:13:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.465 10:13:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.465 10:13:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.465 10:13:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.465 10:13:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.465 10:13:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.465 10:13:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.465 10:13:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.465 10:13:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:41.465 10:13:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:41.465 Cannot find device "nvmf_tgt_br" 00:26:41.465 10:13:39 -- nvmf/common.sh@154 -- # true 00:26:41.465 10:13:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.465 Cannot find device "nvmf_tgt_br2" 00:26:41.465 10:13:39 -- nvmf/common.sh@155 -- # true 
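The variables just defined describe the veth test topology, and the "Cannot find device" messages around this point are nvmf_veth_init tearing down any leftovers before rebuilding it: one veth pair for the initiator and two for the target, host-side peers enslaved to the nvmf_br bridge, target-side ends moved into the nvmf_tgt_ns_spdk namespace. A condensed, hand-runnable equivalent of the steps traced below (run as root; this mirrors the commands visible in this log rather than quoting nvmf/common.sh):

# Condensed equivalent of the nvmf_veth_init steps traced in this log (root required).
ip netns add nvmf_tgt_ns_spdk

# one veth pair for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# target-side ends live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic (port 4420) in and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT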
00:26:41.465 10:13:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:41.465 10:13:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:41.466 Cannot find device "nvmf_tgt_br" 00:26:41.466 10:13:39 -- nvmf/common.sh@157 -- # true 00:26:41.466 10:13:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:41.466 Cannot find device "nvmf_tgt_br2" 00:26:41.466 10:13:39 -- nvmf/common.sh@158 -- # true 00:26:41.466 10:13:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:41.466 10:13:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:41.466 10:13:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.466 10:13:39 -- nvmf/common.sh@161 -- # true 00:26:41.466 10:13:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.466 10:13:39 -- nvmf/common.sh@162 -- # true 00:26:41.466 10:13:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.466 10:13:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.466 10:13:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:41.466 10:13:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.466 10:13:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.466 10:13:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.466 10:13:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.466 10:13:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:41.466 10:13:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:41.466 10:13:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:41.466 10:13:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:41.466 10:13:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:41.466 10:13:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:41.466 10:13:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.466 10:13:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.466 10:13:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.466 10:13:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:41.466 10:13:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:41.466 10:13:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.466 10:13:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.466 10:13:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.466 10:13:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.466 10:13:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.466 10:13:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:41.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:41.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:26:41.466 00:26:41.466 --- 10.0.0.2 ping statistics --- 00:26:41.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.466 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:41.466 10:13:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:41.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:26:41.466 00:26:41.466 --- 10.0.0.3 ping statistics --- 00:26:41.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.466 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:41.466 10:13:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:41.466 00:26:41.466 --- 10.0.0.1 ping statistics --- 00:26:41.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.466 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:41.466 10:13:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.466 10:13:39 -- nvmf/common.sh@421 -- # return 0 00:26:41.466 10:13:39 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:41.466 10:13:39 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:42.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:42.084 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:42.084 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:42.357 10:13:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.357 10:13:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:42.357 10:13:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:42.357 10:13:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.357 10:13:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:42.357 10:13:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:42.357 10:13:40 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:42.357 10:13:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:42.357 10:13:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:42.357 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:26:42.357 10:13:40 -- nvmf/common.sh@469 -- # nvmfpid=103412 00:26:42.357 10:13:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:42.357 10:13:40 -- nvmf/common.sh@470 -- # waitforlisten 103412 00:26:42.357 10:13:40 -- common/autotest_common.sh@829 -- # '[' -z 103412 ']' 00:26:42.357 10:13:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.357 10:13:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.357 10:13:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.357 10:13:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.357 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:26:42.357 [2024-12-16 10:13:40.803234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
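Because the target-side interfaces live in nvmf_tgt_ns_spdk, nvmf_tgt itself is launched through ip netns exec, and the harness then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock (hence the "Waiting for process to start up..." line above). A rough, hand-runnable sketch of that launch-and-wait pattern; the paths are taken from this log, but the polling loop is illustrative and the real waitforlisten in autotest_common.sh does more (RPC probing, a retry limit):

# Illustrative launch-and-wait pattern, not the SPDK waitforlisten implementation.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt   # path as used in this log
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!

# Poll until the RPC socket appears, bailing out if the target process dies first.
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [ -S "$RPC_SOCK" ] && break
    sleep 0.1
done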
00:26:42.357 [2024-12-16 10:13:40.803313] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.357 [2024-12-16 10:13:40.946042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.628 [2024-12-16 10:13:41.016797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:42.628 [2024-12-16 10:13:41.016971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.628 [2024-12-16 10:13:41.016987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.628 [2024-12-16 10:13:41.016999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.628 [2024-12-16 10:13:41.017081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.628 [2024-12-16 10:13:41.017212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.628 [2024-12-16 10:13:41.017380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.628 [2024-12-16 10:13:41.017381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:43.216 10:13:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.216 10:13:41 -- common/autotest_common.sh@862 -- # return 0 00:26:43.216 10:13:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:43.216 10:13:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:43.216 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 10:13:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:43.474 10:13:41 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:43.474 10:13:41 -- scripts/common.sh@312 -- # local nvmes 00:26:43.474 10:13:41 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:43.474 10:13:41 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:43.474 10:13:41 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:43.474 10:13:41 -- scripts/common.sh@297 -- # local bdf= 00:26:43.474 10:13:41 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:43.474 10:13:41 -- scripts/common.sh@232 -- # local class 00:26:43.474 10:13:41 -- scripts/common.sh@233 -- # local subclass 00:26:43.474 10:13:41 -- scripts/common.sh@234 -- # local progif 00:26:43.474 10:13:41 -- scripts/common.sh@235 -- # printf %02x 1 00:26:43.474 10:13:41 -- scripts/common.sh@235 -- # class=01 00:26:43.474 10:13:41 -- scripts/common.sh@236 -- # printf %02x 8 00:26:43.474 10:13:41 -- scripts/common.sh@236 -- # subclass=08 00:26:43.474 10:13:41 -- scripts/common.sh@237 -- # printf %02x 2 00:26:43.474 10:13:41 -- scripts/common.sh@237 -- # progif=02 00:26:43.474 10:13:41 -- scripts/common.sh@239 -- # hash lspci 00:26:43.474 10:13:41 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:43.474 10:13:41 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:43.474 10:13:41 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:43.474 10:13:41 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:43.474 10:13:41 -- scripts/common.sh@244 -- # tr -d '"' 00:26:43.474 10:13:41 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:43.474 10:13:41 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:43.474 10:13:41 -- scripts/common.sh@15 -- # local i 00:26:43.474 10:13:41 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:43.474 10:13:41 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:43.474 10:13:41 -- scripts/common.sh@24 -- # return 0 00:26:43.474 10:13:41 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:43.474 10:13:41 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:43.474 10:13:41 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:43.474 10:13:41 -- scripts/common.sh@15 -- # local i 00:26:43.474 10:13:41 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:43.474 10:13:41 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:43.474 10:13:41 -- scripts/common.sh@24 -- # return 0 00:26:43.474 10:13:41 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:43.474 10:13:41 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:43.474 10:13:41 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:43.474 10:13:41 -- scripts/common.sh@322 -- # uname -s 00:26:43.474 10:13:41 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:43.474 10:13:41 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:43.474 10:13:41 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:43.474 10:13:41 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:43.474 10:13:41 -- scripts/common.sh@322 -- # uname -s 00:26:43.474 10:13:41 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:43.474 10:13:41 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:43.474 10:13:41 -- scripts/common.sh@327 -- # (( 2 )) 00:26:43.474 10:13:41 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:43.474 10:13:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:43.474 10:13:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:43.474 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 ************************************ 00:26:43.474 START TEST spdk_target_abort 00:26:43.474 ************************************ 00:26:43.474 10:13:41 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:43.474 10:13:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 spdk_targetn1 00:26:43.474 10:13:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 10:13:41 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.474 10:13:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.474 10:13:41 -- common/autotest_common.sh@10 -- # set +x 00:26:43.474 [2024-12-16 
10:13:41.994103] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.474 10:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.474 10:13:42 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:43.474 10:13:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.475 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 10:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:43.475 10:13:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.475 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 10:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:43.475 10:13:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.475 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 [2024-12-16 10:13:42.026296] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.475 10:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:43.475 10:13:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:46.755 Initializing NVMe Controllers 00:26:46.755 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:46.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:46.755 Initialization complete. Launching workers. 00:26:46.755 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10741, failed: 0 00:26:46.755 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1049, failed to submit 9692 00:26:46.755 success 784, unsuccess 265, failed 0 00:26:46.755 10:13:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.755 10:13:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:50.040 [2024-12-16 10:13:48.495400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 [2024-12-16 10:13:48.495486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232b5c0 is same with the state(5) to be set 00:26:50.040 Initializing NVMe Controllers 00:26:50.040 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:50.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:50.040 Initialization complete. Launching workers. 00:26:50.040 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6023, failed: 0 00:26:50.040 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1239, failed to submit 4784 00:26:50.040 success 265, unsuccess 974, failed 0 00:26:50.040 10:13:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:50.040 10:13:48 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:53.325 Initializing NVMe Controllers 00:26:53.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:53.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:53.325 Initialization complete. Launching workers. 
00:26:53.325 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31341, failed: 0 00:26:53.325 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2537, failed to submit 28804 00:26:53.325 success 496, unsuccess 2041, failed 0 00:26:53.325 10:13:51 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:53.325 10:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.325 10:13:51 -- common/autotest_common.sh@10 -- # set +x 00:26:53.325 10:13:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.325 10:13:51 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:53.325 10:13:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.325 10:13:51 -- common/autotest_common.sh@10 -- # set +x 00:26:53.893 10:13:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.893 10:13:52 -- target/abort_qd_sizes.sh@62 -- # killprocess 103412 00:26:53.893 10:13:52 -- common/autotest_common.sh@936 -- # '[' -z 103412 ']' 00:26:53.893 10:13:52 -- common/autotest_common.sh@940 -- # kill -0 103412 00:26:53.893 10:13:52 -- common/autotest_common.sh@941 -- # uname 00:26:53.893 10:13:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:53.893 10:13:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103412 00:26:53.893 killing process with pid 103412 00:26:53.893 10:13:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:53.893 10:13:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:53.893 10:13:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103412' 00:26:53.893 10:13:52 -- common/autotest_common.sh@955 -- # kill 103412 00:26:53.893 10:13:52 -- common/autotest_common.sh@960 -- # wait 103412 00:26:54.152 00:26:54.152 real 0m10.674s 00:26:54.152 user 0m44.043s 00:26:54.152 sys 0m1.612s 00:26:54.152 10:13:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:54.152 ************************************ 00:26:54.152 END TEST spdk_target_abort 00:26:54.152 ************************************ 00:26:54.152 10:13:52 -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 10:13:52 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:54.152 10:13:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:54.152 10:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:54.152 10:13:52 -- common/autotest_common.sh@10 -- # set +x 00:26:54.152 ************************************ 00:26:54.152 START TEST kernel_target_abort 00:26:54.152 ************************************ 00:26:54.152 10:13:52 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:54.152 10:13:52 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:54.152 10:13:52 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:54.152 10:13:52 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:54.152 10:13:52 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.152 10:13:52 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:54.152 10:13:52 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:54.152 10:13:52 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.152 10:13:52 -- nvmf/common.sh@627 -- # local block nvme 00:26:54.152 10:13:52 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.152 10:13:52 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:54.152 10:13:52 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.152 10:13:52 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:54.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:54.410 Waiting for block devices as requested 00:26:54.669 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.669 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.669 10:13:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:54.669 10:13:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:54.669 10:13:53 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:54.669 10:13:53 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:54.669 10:13:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:54.669 No valid GPT data, bailing 00:26:54.669 10:13:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:54.669 10:13:53 -- scripts/common.sh@393 -- # pt= 00:26:54.669 10:13:53 -- scripts/common.sh@394 -- # return 1 00:26:54.669 10:13:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:54.669 10:13:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:54.669 10:13:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:54.669 10:13:53 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:54.669 10:13:53 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:54.669 10:13:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:54.928 No valid GPT data, bailing 00:26:54.928 10:13:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:54.928 10:13:53 -- scripts/common.sh@393 -- # pt= 00:26:54.928 10:13:53 -- scripts/common.sh@394 -- # return 1 00:26:54.928 10:13:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:54.928 10:13:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:54.928 10:13:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:54.928 10:13:53 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:54.928 10:13:53 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:54.928 10:13:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:54.928 No valid GPT data, bailing 00:26:54.928 10:13:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:54.929 10:13:53 -- scripts/common.sh@393 -- # pt= 00:26:54.929 10:13:53 -- scripts/common.sh@394 -- # return 1 00:26:54.929 10:13:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:54.929 10:13:53 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:54.929 10:13:53 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:54.929 10:13:53 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:54.929 10:13:53 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:54.929 10:13:53 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:54.929 No valid GPT data, bailing 00:26:54.929 10:13:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:54.929 10:13:53 -- scripts/common.sh@393 -- # pt= 00:26:54.929 10:13:53 -- scripts/common.sh@394 -- # return 1 00:26:54.929 10:13:53 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:54.929 10:13:53 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:54.929 10:13:53 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:54.929 10:13:53 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:54.929 10:13:53 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:54.929 10:13:53 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:54.929 10:13:53 -- nvmf/common.sh@654 -- # echo 1 00:26:54.929 10:13:53 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:54.929 10:13:53 -- nvmf/common.sh@656 -- # echo 1 00:26:54.929 10:13:53 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:54.929 10:13:53 -- nvmf/common.sh@663 -- # echo tcp 00:26:54.929 10:13:53 -- nvmf/common.sh@664 -- # echo 4420 00:26:54.929 10:13:53 -- nvmf/common.sh@665 -- # echo ipv4 00:26:54.929 10:13:53 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:54.929 10:13:53 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed --hostid=f345a6e6-c3a4-4d00-b880-b00cbe6bd6ed -a 10.0.0.1 -t tcp -s 4420 00:26:55.188 00:26:55.188 Discovery Log Number of Records 2, Generation counter 2 00:26:55.188 =====Discovery Log Entry 0====== 00:26:55.188 trtype: tcp 00:26:55.188 adrfam: ipv4 00:26:55.188 subtype: current discovery subsystem 00:26:55.188 treq: not specified, sq flow control disable supported 00:26:55.188 portid: 1 00:26:55.188 trsvcid: 4420 00:26:55.188 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.188 traddr: 10.0.0.1 00:26:55.188 eflags: none 00:26:55.188 sectype: none 00:26:55.188 =====Discovery Log Entry 1====== 00:26:55.188 trtype: tcp 00:26:55.188 adrfam: ipv4 00:26:55.188 subtype: nvme subsystem 00:26:55.188 treq: not specified, sq flow control disable supported 00:26:55.188 portid: 1 00:26:55.188 trsvcid: 4420 00:26:55.188 subnqn: kernel_target 00:26:55.188 traddr: 10.0.0.1 00:26:55.188 eflags: none 00:26:55.188 sectype: none 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
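The kernel target above is configured purely through configfs: mkdir the subsystem, namespace and port groups, echo the settings into their attribute files, and symlink the subsystem under the port. Since set -x does not show redirection targets, the destinations of those echoes are not visible in the trace; the mapping below is inferred from the stock nvmet configfs layout, not from the log itself, and the attribute for the "SPDK-kernel_target" string in particular is an assumption:

# Inferred mapping of the echoes above onto the standard nvmet configfs layout
# (requires the nvmet and nvmet_tcp modules, loaded earlier in this log).
SUB=/sys/kernel/config/nvmet/subsystems/kernel_target
PORT=/sys/kernel/config/nvmet/ports/1

mkdir -p "$SUB/namespaces/1" "$PORT"
echo SPDK-kernel_target > "$SUB/attr_serial"              # exact attribute not visible in the trace
echo 1                  > "$SUB/attr_allow_any_host"      # accept any host NQN
echo /dev/nvme1n3       > "$SUB/namespaces/1/device_path" # back the namespace with the free block device
echo 1                  > "$SUB/namespaces/1/enable"
echo 10.0.0.1           > "$PORT/addr_traddr"             # listen address inside the test netns
echo tcp                > "$PORT/addr_trtype"
echo 4420               > "$PORT/addr_trsvcid"
echo ipv4               > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"                          # expose the subsystem on the port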
00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:55.188 10:13:53 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:58.477 Initializing NVMe Controllers 00:26:58.477 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:58.477 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:58.478 Initialization complete. Launching workers. 00:26:58.478 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31842, failed: 0 00:26:58.478 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31842, failed to submit 0 00:26:58.478 success 0, unsuccess 31842, failed 0 00:26:58.478 10:13:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:58.478 10:13:56 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:01.768 Initializing NVMe Controllers 00:27:01.768 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:01.768 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:01.768 Initialization complete. Launching workers. 00:27:01.768 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65981, failed: 0 00:27:01.768 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27701, failed to submit 38280 00:27:01.768 success 0, unsuccess 27701, failed 0 00:27:01.768 10:13:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.768 10:13:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:05.057 Initializing NVMe Controllers 00:27:05.057 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:05.057 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:05.057 Initialization complete. Launching workers. 
00:27:05.057 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 73109, failed: 0 00:27:05.057 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18240, failed to submit 54869 00:27:05.057 success 0, unsuccess 18240, failed 0 00:27:05.057 10:14:03 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:05.057 10:14:03 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:05.058 10:14:03 -- nvmf/common.sh@677 -- # echo 0 00:27:05.058 10:14:03 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:05.058 10:14:03 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:05.058 10:14:03 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:05.058 10:14:03 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:05.058 10:14:03 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:05.058 10:14:03 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:05.058 ************************************ 00:27:05.058 END TEST kernel_target_abort 00:27:05.058 ************************************ 00:27:05.058 00:27:05.058 real 0m10.491s 00:27:05.058 user 0m5.305s 00:27:05.058 sys 0m2.605s 00:27:05.058 10:14:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.058 10:14:03 -- common/autotest_common.sh@10 -- # set +x 00:27:05.058 10:14:03 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:05.058 10:14:03 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:05.058 10:14:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:05.058 10:14:03 -- nvmf/common.sh@116 -- # sync 00:27:05.058 10:14:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:05.058 10:14:03 -- nvmf/common.sh@119 -- # set +e 00:27:05.058 10:14:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:05.058 10:14:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:05.058 rmmod nvme_tcp 00:27:05.058 rmmod nvme_fabrics 00:27:05.058 rmmod nvme_keyring 00:27:05.058 10:14:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:05.058 10:14:03 -- nvmf/common.sh@123 -- # set -e 00:27:05.058 10:14:03 -- nvmf/common.sh@124 -- # return 0 00:27:05.058 10:14:03 -- nvmf/common.sh@477 -- # '[' -n 103412 ']' 00:27:05.058 10:14:03 -- nvmf/common.sh@478 -- # killprocess 103412 00:27:05.058 10:14:03 -- common/autotest_common.sh@936 -- # '[' -z 103412 ']' 00:27:05.058 10:14:03 -- common/autotest_common.sh@940 -- # kill -0 103412 00:27:05.058 Process with pid 103412 is not found 00:27:05.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103412) - No such process 00:27:05.058 10:14:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103412 is not found' 00:27:05.058 10:14:03 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:05.058 10:14:03 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:05.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:05.576 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:05.576 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:05.576 10:14:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:05.576 10:14:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:05.576 10:14:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.576 10:14:03 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:05.576 10:14:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.576 10:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:05.576 10:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.576 10:14:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:05.576 00:27:05.576 real 0m24.724s 00:27:05.576 user 0m50.835s 00:27:05.576 sys 0m5.485s 00:27:05.576 10:14:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.576 ************************************ 00:27:05.576 END TEST nvmf_abort_qd_sizes 00:27:05.576 10:14:04 -- common/autotest_common.sh@10 -- # set +x 00:27:05.576 ************************************ 00:27:05.576 10:14:04 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:05.576 10:14:04 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:05.576 10:14:04 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:05.576 10:14:04 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:05.576 10:14:04 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:05.576 10:14:04 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:05.576 10:14:04 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:05.576 10:14:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:05.576 10:14:04 -- common/autotest_common.sh@10 -- # set +x 00:27:05.576 10:14:04 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:05.576 10:14:04 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:05.576 10:14:04 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:05.576 10:14:04 -- common/autotest_common.sh@10 -- # set +x 00:27:07.479 INFO: APP EXITING 00:27:07.479 INFO: killing all VMs 00:27:07.479 INFO: killing vhost app 00:27:07.479 INFO: EXIT DONE 00:27:08.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:08.048 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:08.048 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:08.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:08.616 Cleaning 00:27:08.616 Removing: /var/run/dpdk/spdk0/config 00:27:08.616 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:08.616 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:08.616 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:08.616 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:08.616 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:08.616 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:08.616 Removing: /var/run/dpdk/spdk1/config 00:27:08.616 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:08.616 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:08.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:08.875 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:08.875 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:08.875 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:08.875 Removing: /var/run/dpdk/spdk2/config 00:27:08.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:08.875 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:08.876 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:08.876 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:08.876 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:08.876 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:08.876 Removing: /var/run/dpdk/spdk3/config 00:27:08.876 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:08.876 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:08.876 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:08.876 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:08.876 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:08.876 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:08.876 Removing: /var/run/dpdk/spdk4/config 00:27:08.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:08.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:08.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:08.876 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:08.876 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:08.876 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:08.876 Removing: /dev/shm/nvmf_trace.0 00:27:08.876 Removing: /dev/shm/spdk_tgt_trace.pid67611 00:27:08.876 Removing: /var/run/dpdk/spdk0 00:27:08.876 Removing: /var/run/dpdk/spdk1 00:27:08.876 Removing: /var/run/dpdk/spdk2 00:27:08.876 Removing: /var/run/dpdk/spdk3 00:27:08.876 Removing: /var/run/dpdk/spdk4 00:27:08.876 Removing: /var/run/dpdk/spdk_pid100374 00:27:08.876 Removing: /var/run/dpdk/spdk_pid100575 00:27:08.876 Removing: /var/run/dpdk/spdk_pid100866 00:27:08.876 Removing: /var/run/dpdk/spdk_pid101172 00:27:08.876 Removing: /var/run/dpdk/spdk_pid101733 00:27:08.876 Removing: /var/run/dpdk/spdk_pid101744 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102118 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102274 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102437 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102529 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102684 00:27:08.876 Removing: /var/run/dpdk/spdk_pid102793 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103481 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103516 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103550 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103797 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103832 00:27:08.876 Removing: /var/run/dpdk/spdk_pid103866 00:27:08.876 Removing: /var/run/dpdk/spdk_pid67458 00:27:08.876 Removing: /var/run/dpdk/spdk_pid67611 00:27:08.876 Removing: /var/run/dpdk/spdk_pid67936 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68205 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68388 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68470 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68565 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68667 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68700 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68741 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68804 00:27:08.876 Removing: /var/run/dpdk/spdk_pid68908 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69540 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69604 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69667 00:27:08.876 Removing: 
/var/run/dpdk/spdk_pid69695 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69769 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69797 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69876 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69904 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69955 00:27:08.876 Removing: /var/run/dpdk/spdk_pid69985 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70037 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70067 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70219 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70258 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70334 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70409 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70428 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70492 00:27:08.876 Removing: /var/run/dpdk/spdk_pid70506 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70542 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70560 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70589 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70614 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70643 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70665 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70699 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70713 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70748 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70767 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70802 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70821 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70850 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70870 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70905 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70925 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70954 00:27:09.136 Removing: /var/run/dpdk/spdk_pid70973 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71008 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71022 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71062 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71076 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71110 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71130 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71159 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71184 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71213 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71227 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71267 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71281 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71321 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71338 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71379 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71401 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71433 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71453 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71487 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71507 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71542 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71614 00:27:09.136 Removing: /var/run/dpdk/spdk_pid71732 00:27:09.136 Removing: /var/run/dpdk/spdk_pid72170 00:27:09.136 Removing: /var/run/dpdk/spdk_pid79122 00:27:09.136 Removing: /var/run/dpdk/spdk_pid79471 00:27:09.136 Removing: /var/run/dpdk/spdk_pid81883 00:27:09.136 Removing: /var/run/dpdk/spdk_pid82267 00:27:09.136 Removing: /var/run/dpdk/spdk_pid82512 00:27:09.136 Removing: /var/run/dpdk/spdk_pid82558 00:27:09.136 Removing: /var/run/dpdk/spdk_pid82868 00:27:09.136 Removing: /var/run/dpdk/spdk_pid82918 00:27:09.136 Removing: /var/run/dpdk/spdk_pid83301 00:27:09.136 Removing: /var/run/dpdk/spdk_pid83829 00:27:09.136 Removing: /var/run/dpdk/spdk_pid84257 00:27:09.136 Removing: /var/run/dpdk/spdk_pid85203 
00:27:09.136 Removing: /var/run/dpdk/spdk_pid86202 00:27:09.136 Removing: /var/run/dpdk/spdk_pid86314 00:27:09.136 Removing: /var/run/dpdk/spdk_pid86376 00:27:09.136 Removing: /var/run/dpdk/spdk_pid87859 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88102 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88550 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88659 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88808 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88855 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88895 00:27:09.136 Removing: /var/run/dpdk/spdk_pid88946 00:27:09.136 Removing: /var/run/dpdk/spdk_pid89104 00:27:09.136 Removing: /var/run/dpdk/spdk_pid89255 00:27:09.136 Removing: /var/run/dpdk/spdk_pid89522 00:27:09.136 Removing: /var/run/dpdk/spdk_pid89641 00:27:09.136 Removing: /var/run/dpdk/spdk_pid90061 00:27:09.136 Removing: /var/run/dpdk/spdk_pid90448 00:27:09.136 Removing: /var/run/dpdk/spdk_pid90451 00:27:09.136 Removing: /var/run/dpdk/spdk_pid92703 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93021 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93535 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93537 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93891 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93911 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93926 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93951 00:27:09.136 Removing: /var/run/dpdk/spdk_pid93964 00:27:09.136 Removing: /var/run/dpdk/spdk_pid94107 00:27:09.136 Removing: /var/run/dpdk/spdk_pid94109 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94217 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94225 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94332 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94335 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94814 00:27:09.396 Removing: /var/run/dpdk/spdk_pid94863 00:27:09.396 Removing: /var/run/dpdk/spdk_pid95014 00:27:09.396 Removing: /var/run/dpdk/spdk_pid95135 00:27:09.396 Removing: /var/run/dpdk/spdk_pid95532 00:27:09.396 Removing: /var/run/dpdk/spdk_pid95778 00:27:09.396 Removing: /var/run/dpdk/spdk_pid96288 00:27:09.396 Removing: /var/run/dpdk/spdk_pid96852 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97295 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97384 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97462 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97548 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97692 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97782 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97867 00:27:09.396 Removing: /var/run/dpdk/spdk_pid97957 00:27:09.396 Removing: /var/run/dpdk/spdk_pid98298 00:27:09.396 Removing: /var/run/dpdk/spdk_pid99008 00:27:09.396 Clean 00:27:09.396 killing process with pid 61853 00:27:09.396 killing process with pid 61856 00:27:09.396 10:14:07 -- common/autotest_common.sh@1446 -- # return 0 00:27:09.396 10:14:07 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:09.396 10:14:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.396 10:14:07 -- common/autotest_common.sh@10 -- # set +x 00:27:09.396 10:14:08 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:09.396 10:14:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.396 10:14:08 -- common/autotest_common.sh@10 -- # set +x 00:27:09.656 10:14:08 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:09.656 10:14:08 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:09.656 10:14:08 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:09.656 10:14:08 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:09.656 10:14:08 -- spdk/autotest.sh@383 -- # hostname 00:27:09.656 10:14:08 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:09.915 geninfo: WARNING: invalid characters removed from testname! 00:27:31.868 10:14:29 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.405 10:14:33 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:36.953 10:14:35 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.855 10:14:37 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:41.429 10:14:39 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.334 10:14:41 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:45.239 10:14:43 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:45.239 10:14:43 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:45.239 10:14:43 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:45.239 10:14:43 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:45.499 10:14:43 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:45.499 10:14:43 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:45.499 10:14:43 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:27:45.499 10:14:43 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:45.499 10:14:43 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:45.499 10:14:43 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:45.499 10:14:43 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:45.499 10:14:43 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:45.499 10:14:43 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:45.499 10:14:43 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:45.499 10:14:43 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:45.499 10:14:43 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:45.499 10:14:43 -- scripts/common.sh@343 -- $ case "$op" in 00:27:45.499 10:14:43 -- scripts/common.sh@344 -- $ : 1 00:27:45.499 10:14:43 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:45.499 10:14:43 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.499 10:14:43 -- scripts/common.sh@364 -- $ decimal 1 00:27:45.499 10:14:43 -- scripts/common.sh@352 -- $ local d=1 00:27:45.499 10:14:43 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:45.499 10:14:43 -- scripts/common.sh@354 -- $ echo 1 00:27:45.499 10:14:43 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:45.499 10:14:43 -- scripts/common.sh@365 -- $ decimal 2 00:27:45.499 10:14:43 -- scripts/common.sh@352 -- $ local d=2 00:27:45.499 10:14:43 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:45.499 10:14:43 -- scripts/common.sh@354 -- $ echo 2 00:27:45.499 10:14:43 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:45.499 10:14:43 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:45.499 10:14:43 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:45.499 10:14:43 -- scripts/common.sh@367 -- $ return 0 00:27:45.499 10:14:43 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.499 10:14:43 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.499 --rc genhtml_branch_coverage=1 00:27:45.499 --rc genhtml_function_coverage=1 00:27:45.499 --rc genhtml_legend=1 00:27:45.499 --rc geninfo_all_blocks=1 00:27:45.499 --rc geninfo_unexecuted_blocks=1 00:27:45.499 00:27:45.499 ' 00:27:45.499 10:14:43 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.499 --rc genhtml_branch_coverage=1 00:27:45.499 --rc genhtml_function_coverage=1 00:27:45.499 --rc genhtml_legend=1 00:27:45.499 --rc geninfo_all_blocks=1 00:27:45.499 --rc geninfo_unexecuted_blocks=1 00:27:45.499 00:27:45.499 ' 00:27:45.499 10:14:43 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.499 --rc genhtml_branch_coverage=1 00:27:45.499 --rc genhtml_function_coverage=1 00:27:45.499 --rc genhtml_legend=1 00:27:45.499 --rc geninfo_all_blocks=1 00:27:45.499 --rc geninfo_unexecuted_blocks=1 00:27:45.499 00:27:45.499 ' 00:27:45.499 10:14:43 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.499 --rc genhtml_branch_coverage=1 00:27:45.499 --rc genhtml_function_coverage=1 00:27:45.499 --rc genhtml_legend=1 00:27:45.499 --rc geninfo_all_blocks=1 00:27:45.499 --rc geninfo_unexecuted_blocks=1 00:27:45.499 00:27:45.499 ' 00:27:45.499 10:14:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:45.499 10:14:43 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:45.499 10:14:43 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.499 10:14:43 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.499 10:14:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.499 10:14:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.499 10:14:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.499 10:14:43 -- paths/export.sh@5 -- $ export PATH 00:27:45.499 10:14:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.499 10:14:43 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:45.499 10:14:43 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:45.499 10:14:43 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734344083.XXXXXX 00:27:45.499 10:14:43 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734344083.MAf2GF 00:27:45.499 10:14:43 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:45.499 10:14:43 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:45.499 10:14:43 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:45.499 10:14:43 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:45.499 10:14:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:45.499 10:14:43 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:45.499 10:14:43 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:45.499 10:14:43 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:45.499 10:14:43 -- common/autotest_common.sh@10 -- $ set +x 00:27:45.499 10:14:43 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:45.499 10:14:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:45.499 10:14:43 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:45.499 10:14:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:45.499 10:14:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:45.499 10:14:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:45.499 10:14:43 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:45.499 10:14:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:45.499 10:14:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:45.499 10:14:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:45.499 10:14:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:45.499 + [[ -n 5975 ]] 00:27:45.499 + sudo kill 5975 00:27:45.768 [Pipeline] } 00:27:45.786 [Pipeline] // timeout 00:27:45.792 [Pipeline] } 00:27:45.807 [Pipeline] // stage 00:27:45.813 [Pipeline] } 00:27:45.828 [Pipeline] // catchError 00:27:45.838 [Pipeline] stage 00:27:45.840 [Pipeline] { (Stop VM) 00:27:45.853 [Pipeline] sh 00:27:46.136 + vagrant halt 00:27:49.425 ==> default: Halting domain... 00:27:54.736 [Pipeline] sh 00:27:55.011 + vagrant destroy -f 00:27:58.294 ==> default: Removing domain... 00:27:58.306 [Pipeline] sh 00:27:58.587 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:58.596 [Pipeline] } 00:27:58.610 [Pipeline] // stage 00:27:58.616 [Pipeline] } 00:27:58.630 [Pipeline] // dir 00:27:58.635 [Pipeline] } 00:27:58.649 [Pipeline] // wrap 00:27:58.655 [Pipeline] } 00:27:58.668 [Pipeline] // catchError 00:27:58.677 [Pipeline] stage 00:27:58.679 [Pipeline] { (Epilogue) 00:27:58.692 [Pipeline] sh 00:27:58.972 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:04.257 [Pipeline] catchError 00:28:04.259 [Pipeline] { 00:28:04.273 [Pipeline] sh 00:28:04.554 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:04.813 Artifacts sizes are good 00:28:04.822 [Pipeline] } 00:28:04.836 [Pipeline] // catchError 00:28:04.847 [Pipeline] archiveArtifacts 00:28:04.854 Archiving artifacts 00:28:04.995 [Pipeline] cleanWs 00:28:05.010 [WS-CLEANUP] Deleting project workspace... 00:28:05.010 [WS-CLEANUP] Deferred wipeout is used... 00:28:05.037 [WS-CLEANUP] done 00:28:05.039 [Pipeline] } 00:28:05.053 [Pipeline] // stage 00:28:05.059 [Pipeline] } 00:28:05.073 [Pipeline] // node 00:28:05.079 [Pipeline] End of Pipeline 00:28:05.137 Finished: SUCCESS
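For reference, the coverage post-processing traced above (the autotest.sh@383-393 steps) reduces to roughly the standalone sequence below. The lcov flags, paths, and exclude patterns are copied from the log entries; the script itself is a simplified sketch of what autotest.sh does here, not the exact upstream code.

#!/usr/bin/env bash
# Sketch of the coverage merge/filter steps shown in the log above.
# Paths and options are taken verbatim from the traced commands.
set -e

OUT=/home/vagrant/spdk_repo/spdk/../output
LCOV_OPTS=(
  --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1
  --rc geninfo_unexecuted_blocks=1
)

# Capture the coverage produced by the test run into cov_test.info.
lcov "${LCOV_OPTS[@]}" -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
     -t fedora39-cloud-1721788873-2326 -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov "${LCOV_OPTS[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
     -o "$OUT/cov_total.info"

# Strip paths that are not SPDK's own code from the combined report.
lcov "${LCOV_OPTS[@]}" -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov "${LCOV_OPTS[@]}" -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
lcov "${LCOV_OPTS[@]}" -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
lcov "${LCOV_OPTS[@]}" -q -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
lcov "${LCOV_OPTS[@]}" -q -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"

# Drop the intermediate captures, keeping only cov_total.info.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

The end result is a single cov_total.info under the output directory, limited to SPDK sources, which later stages can feed to genhtml using the same LCOV/GENHTML options exported in the log.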